Sunday, June 26, 2022

Recent Questions - Unix & Linux Stack Exchange

How to Store the Output of a for Loop to a Variable

Posted: 26 Jun 2022 07:09 PM PDT

I have the following shell code:

for value in 10 5 27 33 14 25
do
    echo $value
done

But what if I want to manipulate the output later? I want to capture the output of that loop in a variable. Is this possible?
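A minimal sketch of one common approach, assuming the goal is simply to capture everything the loop prints: wrap the whole loop in command substitution (the variable name values is just an example).

    # capture the loop's stdout in a variable (newline separated)
    values=$(for value in 10 5 27 33 14 25
    do
        echo "$value"
    done)

    echo "$values"    # the collected output, ready for further processing

You could equally redirect the loop into a file or pipe it straight into another command if that suits the later manipulation better.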

Xmonad Stealing Keymap for Neovim Toggleterm

Posted: 26 Jun 2022 06:58 PM PDT

When I am using Neovim, I have a plugin that opens a terminal inside Neovim when I hit <Ctrl>+\. Every once in a while it stops working, and I'm not sure whether the issue is in Neovim or whether Xmonad is stealing that key. Has anyone had an issue like this, and if so, how did you solve it?

Old Linux rejects my ssh id_rsa key from newly installed Windows

Posted: 26 Jun 2022 06:54 PM PDT

I have been maintaining an old Linux server (CentOS 6.5) for a long time. I access that Linux server by ssh with 'pub key auth'.

Now I just bought a new Windows laptop (Win 10 or 11, not sure) and installed 'Git for Windows 2.33'. When I try to ssh from the new laptop as usual, I get:

$ ssh -i ~/.ssh/id_rsa.bridge_to_home -p 5122 -vv shaozr@{ip addr}
OpenSSH_8.8p1, OpenSSL 1.1.1m  14 Dec 2021
debug1: Reading configuration data /etc/ssh/ssh_config
debug2: resolve_canonicalize: hostname 27.115.62.170 is address
debug1: Connecting to 27.115.62.170 [27.115.62.170] port 5122.
debug1: Connection established.
debug1: identity file /c/Users/43141/.ssh/id_rsa.bridge_to_home type -1
debug1: identity file /c/Users/43141/.ssh/id_rsa.bridge_to_home-cert type -1
debug1: Local version string SSH-2.0-OpenSSH_8.8
debug1: Remote protocol version 2.0, remote software version OpenSSH_5.3
debug1: compat_banner: match: OpenSSH_5.3 pat OpenSSH_5* compat 0x0c000002
debug2: fd 4 setting O_NONBLOCK
debug1: Authenticating to 27.115.62.170:5122 as 'shaozr'
debug1: load_hostkeys: fopen /c/Users/43141/.ssh/known_hosts: No such file or directory
debug1: load_hostkeys: fopen /c/Users/43141/.ssh/known_hosts2: No such file or directory
debug1: load_hostkeys: fopen /etc/ssh/ssh_known_hosts: No such file or directory
debug1: load_hostkeys: fopen /etc/ssh/ssh_known_hosts2: No such file or directory
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug2: local client KEXINIT proposal
debug2: KEX algorithms: curve25519-sha256,curve25519-sha256@libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group14-sha256,ext-info-c
debug2: host key algorithms: ssh-ed25519-cert-v01@openssh.com,ecdsa-sha2-nistp256-cert-v01@openssh.com,ecdsa-sha2-nistp384-cert-v01@openssh.com,ecdsa-sha2-nistp521-cert-v01@openssh.com,sk-ssh-ed25519-cert-v01@openssh.com,sk-ecdsa-sha2-nistp256-cert-v01@openssh.com,rsa-sha2-512-cert-v01@openssh.com,rsa-sha2-256-cert-v01@openssh.com,ssh-ed25519,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,sk-ssh-ed25519@openssh.com,sk-ecdsa-sha2-nistp256@openssh.com,rsa-sha2-512,rsa-sha2-256
debug2: ciphers ctos: chacha20-poly1305@openssh.com,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com,aes128-cbc,3des-cbc,aes256-cbc,aes192-cbc
debug2: ciphers stoc: chacha20-poly1305@openssh.com,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com,aes128-cbc,3des-cbc,aes256-cbc,aes192-cbc
debug2: MACs ctos: umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1
debug2: MACs stoc: umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1
debug2: compression ctos: none,zlib@openssh.com,zlib
debug2: compression stoc: none,zlib@openssh.com,zlib
debug2: languages ctos:
debug2: languages stoc:
debug2: first_kex_follows 0
debug2: reserved 0
debug2: peer server KEXINIT proposal
debug2: KEX algorithms: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1
debug2: host key algorithms: ssh-rsa,ssh-dss
debug2: ciphers ctos: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,rijndael-cbc@lysator.liu.se
debug2: ciphers stoc: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,rijndael-cbc@lysator.liu.se
debug2: MACs ctos: hmac-md5,hmac-sha1,umac-64@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-ripemd160,hmac-ripemd160@openssh.com,hmac-sha1-96,hmac-md5-96
debug2: MACs stoc: hmac-md5,hmac-sha1,umac-64@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-ripemd160,hmac-ripemd160@openssh.com,hmac-sha1-96,hmac-md5-96
debug2: compression ctos: none,zlib@openssh.com
debug2: compression stoc: none,zlib@openssh.com
debug2: languages ctos:
debug2: languages stoc:
debug2: first_kex_follows 0
debug2: reserved 0
debug1: kex: algorithm: diffie-hellman-group-exchange-sha256
debug1: kex: host key algorithm: (no match)
Unable to negotiate with 27.115.62.170 port 5122: no matching host key type found. Their offer: ssh-rsa,ssh-dss

This is weird.

I can still ssh to that Linux box from my old PC, and I can git clone via ssh (from a well-known git hosting provider) on my new laptop.

It seems that both sides are 'ssh OK', so why does the CentOS server reject my id_rsa key from 'Git for Windows 2.33'?
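For what it's worth, the debug trace shows the negotiation failing because the old server only offers ssh-rsa and ssh-dss host key types, which OpenSSH 8.8 disables by default. A hedged sketch of how one might re-enable them for this one connection (option names as in current OpenSSH; adjust to your client version):

    # re-allow the legacy RSA algorithms for this one old host only
    ssh -o HostKeyAlgorithms=+ssh-rsa \
        -o PubkeyAcceptedAlgorithms=+ssh-rsa \
        -i ~/.ssh/id_rsa.bridge_to_home -p 5122 shaozr@27.115.62.170

The same two options can also be put in a Host block in ~/.ssh/config so they only apply to that server.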

App name still exists in the default applications menu after I uninstall it

Posted: 26 Jun 2022 05:32 PM PDT

I use Manjaro Linux, Xfce edition. I have uninstalled brave-browser from my system, but its entry still exists under Settings -> Default Applications -> Web Browser. When I open it, "Brave" and its logo appear. How can I remove that entry?
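Not a definitive fix, but that dialog is normally populated from .desktop files, so a hedged first step is to look for a leftover Brave entry in the usual XDG locations (adjust paths as needed):

    # look for leftover Brave desktop entries (user- and system-wide)
    grep -ril brave ~/.local/share/applications /usr/share/applications 2>/dev/null

    # after removing a stale file, refresh the desktop entry cache
    update-desktop-database ~/.local/share/applications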

How to set up group-specific folders in Linux

Posted: 26 Jun 2022 05:56 PM PDT

I'm trying to create a segregated workspace for multiple groups; each group member should only be able to read, write and view their associated shared folder.

I've created 2 user groups groupATeam and groupBTeam to handle the permissions of users. I've also assigned the group permissions to the relevant project folders groupA and groupB.

#Check project folder permissions
admin@computer:/folder/data$ ls -al /folder/data | grep groupA
drwsrws--x 2 root groupATeam 4096 Jun 24 11:56 groupA
admin@computer:/folder/data$ ls -al /folder/data | grep groupB
drwsrws--- 2 root groupBTeam   4096 Jun 24 11:38 groupB

For the admin user who is in both groups, I can access both folders and subsequently read and write without issue.

#Check groups
admin@computer:/folder/data$ getent group groupATeam
groupATeam:x:1009:worker_3,worker_4,admin
admin@computer:/folder/data$ getent group groupBTeam
groupBTeam:x:1008:worker_1,worker_2,admin

#Check admin can access and write to groupA folder
admin@computer:/folder/data$ cd groupA/
admin@computer:/folder/data/groupA$ ls
test_file.txt

admin@computer:/folder/data/groupA$ cd ..

#Check admin can access groupB folder
admin@computer:/folder/data$ cd groupB/
admin@computer:/folder/data/groupB$ ls
test_file.txt

People in groupA also seem to have the correct permissions: they can access, read and write to their folder but not to groupB's folder.

# Worker 3 is part of groupA team and therefore should only be able to interact with groupA folder but not groupB
worker_3@computer:~$ cd /folder/data/groupA/
worker_3@computer:/folder/data/groupA$ touch test_file101.txt
worker_3@computer:/folder/data/groupA$ ls
test_file.txt  test_file101.txt
worker_3@computer:/folder/data/groupA$ vim test_file.txt

#Check non group member can access restricted groupB folder
worker_3@computer:~$ cd /folder/data/groupB/
bash: cd: /folder/data/groupB/: Permission denied
# This is the correct behaviour I'm looking for

The issue seems to be with users of the groupBTeam.

# Worker 1 is part of groupB team and therefore should only be able to interact with groupB folder but not groupA
worker_1@computer:/folder/data$ cd groupB/
worker_1@computer:/folder/data/groupB$ ls
test_file.txt

worker_1@computer:/folder/data/groupB$ touch test_file101.txt
worker_1@computer:/folder/data/groupB$ ls
test_file.txt  test_file101.txt

worker_1@computer:~$ cd /folder/data/groupA/

#This shouldn't work
worker_1@computer:/folder/data/groupA$ ls
ls: cannot open directory '.': Permission denied
worker_1@computer:/folder/data/groupA$ cd ..

# Incorrect behavior, I can access the groupA folder even though worker_1 isn't part of
# this group

Members of groupBTeam can access the groupA folder, which isn't the desired behavior.

Can anyone explain why I'm not getting the expected behaviour and how I can rectify it?

For reference, I followed these steps to set up the groups and folder permissions: https://www.tutorialspoint.com/how-to-create-a-shared-directory-for-all-users-in-linux
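One observation, offered tentatively: the ls output above shows groupA as drwsrws--x, i.e. "others" still have the execute bit, which is exactly what lets worker_1 cd into it (compare groupB, which ends in ---). A sketch of one way to tighten it, assuming you want setgid plus group-only access:

    # keep setgid so new files inherit the group, drop all access for "others"
    chmod 2770 /folder/data/groupA
    ls -ld /folder/data/groupA   # should now show drwxrws---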

Suddenly can't ping gateway: RedHat Linux 4.6.3-2

Posted: 26 Jun 2022 04:46 PM PDT

I have an old RedHat Linux server that suddenly stopped being able to ping/access my gateway. It has been working for years.

It does not seem to be a hardware problem. The server can ping every other device on my local network except the gateway. Every other device on my network can ping the gateway and the server. The server can't ping any site out on the internet even if I use an IP address instead of a name. Every other device on my network can ping an arbitrary internet site.

Attempting to ping the gateway from the server does not yield any error, it just times out.

netstat -nr

Kernel IP routing table

Destination Gateway Genmask Flags MSS Window irtt Iface

0.0.0.0 192.1.1.250 0.0.0.0 UG 0 0 0 p33p1

192.1.1.0 0.0.0.0 255.255.255.0 U 0 0 0 p33p1


ifconfig -a

lo Link encap:Local Loopback

      inet addr:127.0.0.1  Mask:255.0.0.0
      inet6 addr: ::1/128 Scope:Host
      UP LOOPBACK RUNNING  MTU:16436  Metric:1
      RX packets:404 errors:0 dropped:0 overruns:0 frame:0
      TX packets:404 errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:0
      RX bytes:33724 (32.9 KiB)  TX bytes:33724 (32.9 KiB)

p33p1 Link encap:Ethernet HWaddr 2C:27:D7:33:6D:9E

      inet addr:192.1.1.8  Bcast:192.1.1.255  Mask:255.255.255.0
      inet6 addr: 2002:c0a8:101:0:2e27:d7ff:fe33:6d9e/64 Scope:Global
      inet6 addr: fe80::2e27:d7ff:fe33:6d9e/64 Scope:Link
      UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
      RX packets:19261 errors:0 dropped:0 overruns:0 frame:0
      TX packets:5875 errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:1000
      RX bytes:1443523 (1.3 MiB)  TX bytes:637244 (622.3 KiB)
      Interrupt:44 Base address:0xe000

wlan0 Link encap:Ethernet HWaddr D0:DF:9A:78:27:90

      UP BROADCAST MULTICAST  MTU:1500  Metric:1
      RX packets:0 errors:0 dropped:0 overruns:0 frame:0
      TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:1000
      RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)

Note that ifconfig shows a lot of TX and RX packets with no errors.


ethtool p33p1

Settings for p33p1:

Supported ports: [ TP MII ]
Supported link modes:   10baseT/Half 10baseT/Full
                        100baseT/Half 100baseT/Full
Supported pause frame use: No
Supports auto-negotiation: Yes
Advertised link modes:  10baseT/Half 10baseT/Full
                        100baseT/Half 100baseT/Full
Advertised pause frame use: Symmetric Receive-only
Advertised auto-negotiation: Yes
Link partner advertised link modes:  10baseT/Half 10baseT/Full
                                     100baseT/Half 100baseT/Full
Link partner advertised pause frame use: Symmetric Receive-only
Link partner advertised auto-negotiation: Yes
Speed: 100Mb/s
Duplex: Full
Port: MII
PHYAD: 0
Transceiver: internal
Auto-negotiation: on
Supports Wake-on: pumbg
Wake-on: g
Current message level: 0x00000033 (51)
                       drv probe ifdown ifup
Link detected: yes

iptables -L -v -n

 pkts bytes target prot opt in out source destination

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target prot opt in out source destination

Chain OUTPUT (policy ACCEPT 6546 packets, 628K bytes)
 pkts bytes target prot opt in out source destination


ip link show

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

2: p33p1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000

link/ether 2c:27:d7:33:6d:9e brd ff:ff:ff:ff:ff:ff  

3: wlan0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN qlen 1000

link/ether d0:df:9a:78:27:90 brd ff:ff:ff:ff:ff:ff  

ip -s link show p33p1

2: p33p1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000

    link/ether 2c:27:d7:33:6d:9e brd ff:ff:ff:ff:ff:ff
    RX: bytes  packets  errors  dropped overrun mcast
    3267409    43526    0       0       0       0
    TX: bytes  packets  errors  dropped carrier collsns
    1007158    10078    0       0       0       0

ip neighbor show

fe80::5a8d:9ff:fee2:c392 dev p33p1 lladdr 58:8d:09:e2:c3:92 router STALE

192.1.1.250 dev p33p1 lladdr 54:af:97:2d:e0:85 REACHABLE

192.1.1.29 dev p33p1 lladdr 88:9f:fa:5b:1d:4b STALE

192.1.1.244 dev p33p1 lladdr 54:44:a3:22:13:4e REACHABLE

192.1.1.12 dev p33p1 lladdr d8:cb:8a:c1:53:73 DELAY

192.1.1.200 dev p33p1 lladdr 00:21:b9:00:f4:f0 REACHABLE

192.1.1.242 dev p33p1 lladdr 00:11:d9:3d:f3:7f REACHABLE

192.1.1.97 dev p33p1 lladdr d8:de:3a:1e:4d:c9 STALE

Note that this says that 192.1.1.250 is REACHABLE.


It appears to me that traffic from the server to the gateway never goes out the ethernet interface despite the routing. Wireshark shows no traffic from the server when I attempt to ping the gateway from the server. Traffic to every other localnet node goes out the ethernet interface.

The above netstat -nr shows that the default gateway is 192.1.1.250 (that is the address of my router). The ifconfig above shows that the Linux server is at 192.1.1.8 (same subnet as the default gateway).

I changed one thing that may have caused this issue. I changed the IP address of the server from hardcoded to DHCP. But I tried changing it back to hardcoded with no success.

Please help. I am tearing my hair out!
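Since Wireshark shows nothing leaving the interface, a hedged next step is to check what the kernel actually decides for that destination, in case a blackhole route or policy rule is intercepting it before it hits the wire:

    # which route/interface the kernel would use for the gateway
    ip route get 192.1.1.250

    # any policy routing rules beyond the default three?
    ip rule show

    # watch ARP and ICMP on the wire while pinging from another terminal
    tcpdump -n -i p33p1 'arp or icmp'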

Alsa vdownmix config - how to use with different devices?

Posted: 26 Jun 2022 07:12 PM PDT

I am trying to configure ALSA to downmix 5.1 surround audio to 2.0 stereo. There is an ALSA output plugin vdownmix that seems to do exactly this, but I can only seem to use it with my onboard audio instead of my USB soundcard, despite the USB soundcard being set to default. The config in question is /usr/share/alsa/alsa.conf.d/60-vdownmix.conf (from Debian bullseye libasound2-plugins):

    @args [ SLAVE CHANNELS DELAY ]
    @args.SLAVE {
        type string
        default "plug:hw"
    }
    @args.CHANNELS {
        type integer
        default 6
    }
    @args.DELAY {
        type integer
        default 0
    }
    type vdownmix
    slave.pcm $SLAVE
    hint {
        show {
            @func refer
            name defaults.namehint.basic
        }
        description "Plugin for channel downmix (stereo) with a simple spacialization"
    }
}

The issue seems to be the line default "plug:hw", which only lets me use my onboard sound instead of the USB card. What is the proper syntax to point that at my USB sound card, or better yet, can I modify it so I can use an arbitrary slave device?
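I can't vouch for every setup, but one hedged approach is to define your own vdownmix PCM in ~/.asoundrc with the slave pointing at the USB card; the card name below (CARD=Device) is only a placeholder, take the real one from aplay -L or /proc/asound/cards.

    # ~/.asoundrc - hypothetical example, substitute your USB card's name or number
    pcm.usbdownmix {
        type vdownmix
        slave.pcm "plughw:CARD=Device"
    }

Then test with something like aplay -D usbdownmix file.wav, or set usbdownmix as the default PCM once it works.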

systemd user unit error on boot : Failed to add dependency ignoring: Invalid argument

Posted: 26 Jun 2022 04:00 PM PDT

Arch 5.18/ MATE Desktop

I have a user service that sets up values for my panel

[Unit]
Description=Set values for panel widgets
After=mnt-ram
After=sys-subsystem-net-devices-eno1.device

[Service]
ExecStart=/home/stephen/bin/panel-setup.sh
Type=oneshot
RemainAfterExit=True

[Install]
WantedBy=default.target

Both mnt-ram and sys-subsystem-net-devices-enp0s8.device show up as active for systemctl --user list-units.

At boot the journal reports

 systemd[669]: /home/stephen/.config/systemd/user/panel-setup.service:3: Failed to add dependency on mnt-ram, ignoring: Invalid argument     

However, after the desktop loads I can run systemctl --user restart panel-setup without error and with the expected effect.
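For reference, "Invalid argument" is what systemd reports when a dependency is not a valid unit name: entries in After= need the full unit name including its type suffix. A hedged correction, assuming the mount unit really is called mnt-ram.mount:

    [Unit]
    Description=Set values for panel widgets
    # unit names in After= must carry their suffix (.mount, .device, ...)
    After=mnt-ram.mount
    After=sys-subsystem-net-devices-eno1.device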

Can't Use Full Resolution for External Monitor Sway

Posted: 26 Jun 2022 03:35 PM PDT

So, some system specs before we start:

Model: ThinkPad X1 Carbon Gen 9
RAM: 32GB
CPU: i7-1165G7
GPU: TigerLake-LO GT2 (Iris Xe)
Kernel: 5.18.6-arch1-1
WM: sway 1.7
Wayland: 1.20.0-2

Here's the situation, I've got an external monitor with a native resolution of 3840x2160 at up to 144Hz. This is all detected correctly when I plug it in over DP/USB-C and run swaymsg -t get_outputs.

However, the external monitor then flickers and cuts to black whenever something changes on-screen before restoring itself.

I can get it to run stably at 1440p which would indicate some sort of bandwidth issue. However, I'm using exactly the same cable in exactly the same port that I run this monitor on at full resolution @144Hz in Windows, where it works absolutely flawlessly.

To further complicate this, I can actually get it to run at 2560x1440@144Hz so if we do a back-of-the-napkin calculation:

2560x1440@144Hz ---- 530841600 pixels/s
3840x2160@ 60Hz ---- 497664000 pixels/s

Which would make the bandwidth issues hypothesis look somewhat less likely.

Having only recently (quite literally yesterday) decided to give Wayland a go, I wouldn't even know where to begin debugging this issue. Any ideas?
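One hedged experiment worth trying: pin the mode explicitly in the sway config (or at runtime with swaymsg) and step the refresh rate up from a known-stable setting; the output name DP-1 below is only a placeholder, use the name reported by swaymsg -t get_outputs.

    # in ~/.config/sway/config (or prefix the line with "swaymsg" at runtime)
    output DP-1 mode 3840x2160@60Hz
    # if that is stable, try higher refresh rates, e.g. 3840x2160@120Hz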

ufw "command not found"- but as the root user!

Posted: 26 Jun 2022 02:48 PM PDT

My Debian 11 VPS has been running for about 2 weeks, and today I wanted to analyse why my traffic is ~70GiB (counted by bashtop). Somewhere on the net I read that nethogs could help, so I installed it with my non-root (but sudo-group) user.

sudo apt install nethogs

I couldn't run it.

So I switched to the root user with

su

It still didn't work. So I just wanted to check the most important thing, ufw.

ufw status

Output: command not found


So ...

  1. ufw is installed
  2. but it works if I execute it from the full path with

/usr/sbin/ufw status

But I also want to know whether incoming traffic has really been blocked by default until now:

/usr/sbin/ufw status verbose

Output: ERROR: problem running sysctl

Something is really messed up ... I don't know why. The last thing I did 2 weeks ago was install kuma-uptime with kuma_install.sh. After that I didn't try ufw. So my $PATH seems not to be set correctly - even as the root user.

I'm not an expert, but this is what my .bashrc file looks like (for the root user):

# ~/.bashrc: executed by bash(1) for non-login shells.

# Note: PS1 and umask are already set in /etc/profile. You should not
# need this unless you want different defaults for root.
# PS1='${debian_chroot:+($debian_chroot)}\h:\w\$ '
# umask 022

# You may uncomment the following lines if you want `ls' to be colorized:
# export LS_OPTIONS='--color=auto'
# eval "`dircolors`"
# alias ls='ls $LS_OPTIONS'
# alias ll='ls $LS_OPTIONS -l'
# alias l='ls $LS_OPTIONS -lA'
#
# Some more alias to avoid making mistakes:
# alias rm='rm -i'
# alias cp='cp -i'
# alias mv='mv -i'

and the .profile file:

# ~/.profile: executed by Bourne-compatible login shells.

if [ "$BASH" ]; then
  if [ -f ~/.bashrc ]; then
    . ~/.bashrc
  fi
fi

mesg n 2> /dev/null || true

and this is the path to ufw:

ufw: /usr/sbin/ufw /etc/ufw /lib/ufw /usr/share/ufw /usr/share/man/man8/ufw.8.gz  

and that is my echo $PATH:

/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/snap/bin  

Hope this is enough and someone can help me, please - you're my last hope. Otherwise I have to switch to Ubuntu; maybe Ubuntu behaves better. It would be great if someone could tell me more or less exactly what I have to do now.
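Reading the $PATH you posted, nothing looks broken with ufw itself: plain su keeps the calling user's environment, and that PATH has no /usr/sbin, which is where Debian puts admin tools like ufw (and sysctl lives in /sbin, which would explain the second error too). A sketch of the two usual workarounds:

    # start a full login shell as root so /etc/profile sets root's PATH
    su -

    # or, for the current shell only, append the sbin directories
    export PATH="$PATH:/usr/sbin:/sbin:/usr/local/sbin"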

Is this a safe way to migrate my data to a new computer?

Posted: 26 Jun 2022 02:45 PM PDT

I recently bought a new laptop and I would like to migrate to it with as little hassle as possible. I don't want to do a fresh install since I have made various tweaks to my current setup for things like automounting remote drives from my NAS, configuring networking etc. that I would prefer not to have to redo.

My current thinking is that I can just dump the contents of my hard drive to a file, then cat that file onto the new drive. The general idea will be:

  1. On the old computer, cat the drive into a file on an external USB disk and (as root):

    # cat /dev/sda > /mnt/externalUsb/sda.img  
  2. I then boot into a live system on the new computer, connect the external drive and (as root):

    # cat /mnt/externalUsb/sda.img | sudo tee /dev/sda  
  3. Shut down the live session, reboot the machine and, I hope, find myself in a working system which is a perfect clone of my old machine.

Some relevant notes:

  • The hardware of the old and new machines is relatively similar as I will be moving from a ThinkPad T460P to a ThinkPad P14s Gen 2.
  • The new machine has a 1TB hard drive but the old one is only 512G.
  • I am using Arch, dual booted with a Windows 10. I am not particularly bothered about keeping the Windows install.

My current machine's disk setup:

$ sudo parted -l
Model: ATA SAMSUNG MZ7LN512 (scsi)
Disk /dev/sda: 512GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system     Name                          Flags
 1      1049kB  274MB   273MB   fat32           EFI system partition          boot, hidden, esp
 2      274MB   290MB   16.8MB                  Microsoft reserved partition  msftres
 3      290MB   86.4GB  86.1GB  ntfs            Basic data partition          msftdata
 5      86.4GB  136GB   50.0GB  ext4
 6      136GB   437GB   301GB   ext4
 9      437GB   485GB   47.3GB  ntfs                                          msftdata
 8      485GB   495GB   10.5GB  ext4
 7      495GB   511GB   16.1GB  linux-swap(v1)                                swap
 4      511GB   512GB   1049MB  ntfs            Basic data partition          hidden, diag

I am expecting the kernel to detect the new/different hardware the first time it boots and sort it out for me automatically. Am I missing something obvious here? Any specific problems I might encounter? The new drive is larger, so that shouldn't be a problem, right? I have an ecryptfs-encrypted directory (two of them, actually), am I right in assuming that won't be an issue? Will I need to do anything special to handle the EFI system partition perhaps?
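The cat approach should work, but as a hedged alternative dd gives you progress reporting and block-size control; the target device name below (/dev/nvme0n1) is only a guess for the new laptop, so double-check with lsblk before writing anything.

    # on the old machine (as root): image the whole disk to the external drive
    dd if=/dev/sda of=/mnt/externalUsb/sda.img bs=4M status=progress

    # on the new machine, booted from a live USB (as root) - VERIFY the target first!
    lsblk
    dd if=/mnt/externalUsb/sda.img of=/dev/nvme0n1 bs=4M status=progress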

CentOS 7: yum can't find valid baseurl

Posted: 26 Jun 2022 08:12 PM PDT

In trying to install the latest version of R on CentOS 7 through EPEL, I ran the following command:

yum --enablerepo=epel clean metadata

After this, my yum install refuses to work and gives me the following output:

Failed to set locale, defaulting to C
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile


 One of the configured repositories failed (Unknown),
 and yum doesn't have enough cached data to continue. At this point the only
 safe thing yum can do is fail. There are a few ways to work "fix" this:

     1. Contact the upstream for the repository and get them to fix the problem.

     2. Reconfigure the baseurl/etc. for the repository, to point to a working
        upstream. This is most often useful if you are using a newer
        distribution release than is supported by the repository (and the
        packages for the previous distribution release still work).

     3. Run the command with the repository temporarily disabled
            yum --disablerepo=<repoid> ...

     4. Disable the repository permanently, so yum won't use it by default. Yum
        will then just ignore the repository until you permanently enable it
        again or use --enablerepo for temporary usage:

            yum-config-manager --disable <repoid>
        or
            subscription-manager repos --disable=<repoid>

     5. Configure the failing repository to be skipped, if it is unavailable.
        Note that yum will try to contact the repo. when it runs most commands,
        so will have to try and fail each time (and thus. yum will be be much
        slower). If it is a very temporary problem though, this is often a nice
        compromise:

            yum-config-manager --save --setopt=<repoid>.skip_if_unavailable=true

Cannot find a valid baseurl for repo: centosplus/7/x86_64

I've tried the various solutions available to tackle this problem, but nothing works. I feel like I probably messed something up when I cleaned metadata, but I have no idea how to fix this. (I am unfamiliar with the inner workings of Linux OS).

When I do cat /etc/yum.repos.d/CentOS-Base.repo, I get:

[root@localhost ~]# cat /etc/yum.repos.d/CentOS-Base.repo
# CentOS-Base.repo
#
# The mirror system uses the connecting IP address of the client and the
# update status of each mirror to pick mirrors that are updated to and
# geographically close to the client.  You should use this for CentOS updates
# unless you are manually picking other mirrors.
#
# If the mirrorlist= does not work for you, as a fall back you can try the
# remarked out baseurl= line instead.
#
#

[base]
name=CentOS-$releasever - Base
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os&infra=$infra
baseurl=http://mirror.centos.org/centos/$releasever/os/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7

#released updates
[updates]
name=CentOS-$releasever - Updates
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=updates&infra=$infra
#baseurl=http://mirror.centos.org/centos/$releasever/updates/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7

#additional packages that may be useful
[extras]
name=CentOS-$releasever - Extras
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=extras&infra=$infra
#baseurl=http://mirror.centos.org/centos/$releasever/extras/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7

#additional packages that extend functionality of existing packages
[centosplus]
name=CentOS-$releasever - Plus
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=centosplus&infra=$infra
#baseurl=http://mirror.centos.org/centos/$releasever/centosplus/$basearch/
gpgcheck=1
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
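Since the failure is on centosplus rather than EPEL, it may just be stale cached metadata. Not a guaranteed fix, but the usual first steps are to flush everything and rebuild the cache, or to run the install with the failing repo temporarily disabled (repo id taken from the error message):

    # drop all cached repo metadata and rebuild it
    yum clean all
    yum makecache

    # or run the original install with the failing repo disabled
    yum --disablerepo=centosplus --enablerepo=epel install R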

Create first-come-first-serve scheduling with a bash script

Posted: 26 Jun 2022 08:13 PM PDT

#!/bin/bash

sort(){
    for ((i = 0; i<$n; i++))
    do
        for((j = 0; j<`expr $n - $i - 1`; j++))
        do
            if [ ${arrival_time[j]} -gt ${arrival_time[$((j+1))]} ]
            then
                # swap
                temp=${arrival_time[j]}
                arrival_time[$j]=${arrival_time[$((j+1))]}
                arrival_time[$((j+1))]=$temp
                temp=${burst_time[j]}
                burst_time[$j]=${burst_time[$((j+1))]}
                burst_time[$((j+1))]=$temp
                temp=${pid[j]}
                pid[$j]=${pid[$((j+1))]}
                pid[$((j+1))]=$temp
            elif [ ${arrival_time[j]} -eq ${arrival_time[$((j+1))]} ]
            then
                if [ ${pid[j]} -eq ${pid[$((j+1))]} ]
                then
                    temp=${arrival_time[j]}
                    arrival_time[$j]=${arrival_time[$((j+1))]}
                    arrival_time[$((j+1))]=$temp
                    temp=${burst_time[j]}
                    burst_time[$j]=${burst_time[$((j+1))]}
                    burst_time[$((j+1))]=$temp
                    temp=${pid[j]}
                    pid[$j]=${pid[$((j+1))]}
                    pid[$((j+1))]=$temp
                fi
            fi
        done
    done
}

border(){
    z=121
    for ((i=0; i<$z; i++))
    do
        echo -n "-"
    done
    echo ""
}

findWaitingTime(){
    service_time[0]=0
    waiting_time[0]=0
    for ((i=1; i<$n; i++))
    do
        z=1
        y=`expr $i - $z`
        service_time[$i]=`expr ${service_time[$y]} + ${burst_time[$y]}`
        waiting_time[$i]=`expr ${service_time[$i]} - ${arrival_time[$i]}`
        if [ ${waiting_time[$i]} -lt 0 ]
        then
            waiting_time[$i]=0
        fi
    done
}

findTurnAroundTime(){
    for ((i=0; i<$n; i++))
    do
        tat[$i]=`expr ${waiting_time[$i]} + ${burst_time[$i]}`
    done
}

findAverageTime(){
    sort
    findWaitingTime
    findTurnAroundTime
    total_wt=0
    total_tat=0
    border
    printf "|%-18s|%-20s|%-18s|%-20s|%-18s|%-20s|\n" "Process Id" "Burst time" "Arrival time" "Waiting time" "Turn around time" "Completion time"
    border
    for ((i=0; i<$n; i++))
    do
        total_wt=`expr $total_wt + ${waiting_time[$i]}`
        total_tat=`expr ${tat[$i]} + $total_tat`
        completion_time=`expr ${arrival_time[$i]} + ${tat[$i]}`
        printf "|%-18s|%-20s|%-18s|%-20s|%-18s|%-20s|\n" ${pid[$i]} ${burst_time[$i]} ${arrival_time[$i]} ${waiting_time[$i]} ${tat[$i]} $completion_time
        #echo "${burst_time[$i]}     ${arrival_time[$i]}     ${waiting_time[$i]}       ${tat[$i]}         $completion_time"
    done
    border
    #avgwt=`echo "scale=3; $total_wt / $n" | bc`
    echo -n "Average waiting time ="
    printf %.3f\\n "$(($total_wt / $n))"
    #avgtat=`echo "scale=3; $total_tat / $n" | bc`
    echo -n "Average turn around time ="
    printf %.3f\\n "$(($total_tat / $n))"

    for ((i=0; i<8*n+n+1; i++))
    do
        echo -n "-"
    done
    echo ""

    for ((i=0; i<$n; i++))
    do
        echo -n "|   "
        echo -n "P${pid[$i]}"
        echo -n "   "
    done
    echo "|"
    for ((i=0; i<8*n+n+1; i++))
    do
        echo -n "-"
    done
    echo ""
    echo -n "0  "
    for ((i=0; i<$n; i++))
    do
        echo -n "`expr ${arrival_time[$i]} + ${tat[$i]}`"
        echo -n "      "
    done
    echo ""
}

n=$(sed -e '1~2d' fcfs1.txt |awk '{ for (i=1; i<=NF; i++) RtoC[i]= (i in RtoC?RtoC[i] OFS :"") $i; } END{ for (i=1; i<=NF; i++) print RtoC[i] }'| awk '{print $1}' |wc -l)
for ((i=0; i<$n; i++))
do
    pid[$i]=$(cat fcfs.txt | awk '{print $1}')
    arrival_time[$i]=$(cat fcfs.txt | awk '{print $2}')
    burst_time[$i]=$(cat fcfs.txt | awk '{print $3}')
done
findAverageTime
The content of fcfs.txt looks like this:

1  15  10
2   17  12

If the input file has only one process, the script works perfectly; if it has more than one, it gives an error.

Output when there is only one process in the input file:

-------------------------------------------------------------------------------------------------------------------------
|Process Id        |Burst time          |Arrival time      |Waiting time        |Turn around time  |Completion time     |
-------------------------------------------------------------------------------------------------------------------------
|1                 |5                   |10                |0                   |5                 |15                  |
-------------------------------------------------------------------------------------------------------------------------
Average waiting time =0.000
Average turn around time =5.000
----------
|   P1   |
----------
0   15
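One likely culprit, offered as a hedged observation: pid[$i]=$(cat fcfs.txt | awk '{print $1}') puts the entire first column (all rows joined by newlines) into every array element, which only happens to work when the file has a single line (also note the count is taken from fcfs1.txt while the arrays read fcfs.txt). A sketch of reading the file row by row instead:

    # read fcfs.txt line by line: column 1 = pid, 2 = arrival time, 3 = burst time
    i=0
    while read -r p a b; do
        [ -z "$p" ] && continue          # skip blank lines
        pid[$i]=$p
        arrival_time[$i]=$a
        burst_time[$i]=$b
        i=$((i+1))
    done < fcfs.txt
    n=$i

    findAverageTime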

Suppress spammy Discord log

Posted: 26 Jun 2022 02:46 PM PDT

Every time I open Discord (installed using the deb package), my log is spammed with these kinds of messages every second:

Jun 25 20:14:20 pop2104 gnome-shell[102661]: [2022-06-25 20:14:20.833] [102901] (device_info_linux.cc:45): NumberOfDevices
Jun 25 20:14:20 pop2104 gnome-shell[102661]: [2022-06-25 20:14:20.950] [102901] (device_info_linux.cc:45): NumberOfDevices
Jun 25 20:14:20 pop2104 gnome-shell[102661]: [2022-06-25 20:14:20.950] [102901] (device_info_linux.cc:78): GetDeviceName

Note: I was previously using the Flatpak version of Discord, and it had a similar problem.

How to suppress this?

Print occurrence count of "keys" and sum of the associated "values" in 3-column data file

Posted: 26 Jun 2022 07:31 PM PDT

I'm reading a Redis dump file using shell.

There are 3 main columns in the dump file as below.

Text:tags:name    682651    520
Text:tags:age     78262     450
Value:cache       77272     672
Value:cache:name  76258     872
New:specific      77628     762
New:test          76628     8622

Expected output:

Key     Count     Sum
Text:*  2         970
Value:* 2         1544
New:*   2         9384

I'm looking to produce the expected output above; the grouping is based on substrings of the keys, which may occur at the start, middle or end of the key strings.
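Assuming "key" means everything before the first colon (as the expected output suggests), a sketch in awk that counts rows and sums the third column per prefix (the file name dumpfile is a placeholder):

    awk '{
            split($1, parts, ":")           # prefix = text before the first ":"
            key = parts[1] ":*"
            count[key]++
            sum[key] += $3
         }
         END {
            printf "%-8s%-10s%s\n", "Key", "Count", "Sum"
            for (k in count)
                printf "%-8s%-10d%d\n", k, count[k], sum[k]
         }' dumpfile

Note that the order of the output lines from for (k in count) is unspecified; pipe through sort if a fixed order matters.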

Effective ACL permissions changing permissions

Posted: 26 Jun 2022 05:03 PM PDT

From a bash shell script, I am creating a folder and storing a mysqldump there. I am sure that there is no command related to permissions in my script. To allow another user to access these files, I have used ACLs, but when he tried to access the files he got a permission denied error, and the issue is with the effective permissions of the ACL.

The owner of the directory is ola, the new user who is trying to access the folder is uber, and the folder is gettaxi.

Permissions of Parent directory

[/omega/olabooktmp]# getfacl .
# file: .
# owner: ola
# group: ola
user::rwx
user:uber:rwx
group::r-x
mask::rwx
other::r-x
default:user::rwx
default:user:uber:rwx
default:group::r-x
default:mask::rwx
default:other::r-x

Permissions of Child directory

[/omega/olabooktemp]# getfacl gettaxi/
# file: gettaxi/
# owner: ola
# group: ola
user::rwx
user:uber:rwx       #effective:---
group::r-x          #effective:---
mask::---
other::---
default:user::rwx
default:user:uber:rwx
default:group::r-x
default:mask::rwx
default:other::r-x

I see that for the new directory gettaxi the mask permissions are mask::---, so I think this is causing the issue, but I don't completely understand it or how to solve it.
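That reading looks right: the effective permissions are the ACL entries ANDed with the mask, so mask::--- zeroes out user:uber:rwx. A hedged sketch of resetting the mask on the existing tree (path taken from your example, adjust as needed):

    # recalculate/raise the mask so the named-user entry becomes effective again
    setfacl -R -m m::rwx /omega/olabooktmp/gettaxi

    # user:uber:rwx should no longer show "#effective:---"
    getfacl /omega/olabooktmp/gettaxi

As for the cause, one common (though not certain) explanation is that the directory was created with a restrictive mode (e.g. mkdir -m 700 or a 077 umask in the script's environment), since the group bits of the creation mode become the ACL mask on a directory with a default ACL.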

Any suggestions greatly appreciated.

Thank you.

What does the ntp option "restrict default nopeer" do?

Posted: 26 Jun 2022 08:03 PM PDT

NTP version installed: ntp-4.2.6p5-5

I'm trying to understand the usage and meaning of the ntp restrict directive, specifically restrict default nopeer.

Quoting the NTP documentation:

nopeer: Deny packets that might mobilize an association unless authenticated. This includes broadcast, symmetric-active and manycast server packets when a configured association does not exist. It also includes pool associations, so if you want to use servers from a pool directive and also want to use nopeer by default, you'll want a "restrict source ..." line as well that does not include the nopeer directive. Note that this flag does not apply to packets that do not attempt to mobilize an association.

Does it mean that when we use restrict default nopeer, we can't associate peers without authentication (without using keys)?

Consider the following scenario:

Server config (IP 10.12.12.12):

[root@sdp_1 ~]# cat /etc/ntp.conf

server 10.12.10.53
#restrict default kod nomodify nopeer noquery notrap
#restrict -6 default kod nomodify nopeer noquery notrap
#restrict 127.0.0.1
#restrict -6 ::1
restrict default nopeer
keys /etc/ntp/keys

Peer config (IP 10.12.12.11):

[root@sdp_2 ~]# cat /etc/ntp.conf

#server 10.12.10.53
#restrict default kod nomodify nopeer noquery notrap
#restrict -6 default kod nomodify nopeer noquery notrap
#restrict 127.0.0.1
#restrict -6 ::1
restrict default nopeer
peer 10.12.12.12 minpoll 4
keys /etc/ntp/keys

Still, I can see peer associations at 10.12.12.11, as below:

ntpq> associations

ind assid status  conf reach auth condition  last_event cnt
===========================================================
  1 48387  961a   yes   yes  none  sys.peer    sys_peer  1
ntpq>

[root@sdp_2 ~]# ntpq -np
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
*10.12.12.12    10.12.10.53     5 u   13   16  377    0.211    8.953   0.842

Are my assumptions right?
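My tentative reading of the quoted documentation: nopeer only blocks associations that are not already configured, and sdp_2 explicitly configures peer 10.12.12.12, so that association is mobilized regardless of the restrict line. If the intent is to require authentication for the peering, the usual pattern is a shared symmetric key on both sides, roughly like this (key id and secret are placeholders):

    # /etc/ntp/keys (same file on both hosts, mode 600)
    1 M mysharedsecret

    # /etc/ntp.conf additions on both hosts
    keys /etc/ntp/keys
    trustedkey 1

    # on sdp_2, bind the peer association to that key
    peer 10.12.12.12 key 1 minpoll 4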

How to install WiFi driver in Debian 9?

Posted: 26 Jun 2022 06:26 PM PDT

I installed Debian 9.

Now I need to install a WiFi card driver, but I couldn't find a working one for my system.

These are my devices listed by command lspci -nn:

00:00.0 Host bridge [0600]: Intel Corporation 2nd Generation Core Processor Family DRAM Controller [8086:0104] (rev 09)
00:01.0 PCI bridge [0604]: Intel Corporation Xeon E3-1200/2nd Generation Core Processor Family PCI Express Root Port [8086:0101] (rev 09)
00:02.0 VGA compatible controller [0300]: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller [8086:0116] (rev 09)
00:16.0 Communication controller [0780]: Intel Corporation 6 Series/C200 Series Chipset Family MEI Controller #1 [8086:1c3a] (rev 04)
00:1a.0 USB controller [0c03]: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #2 [8086:1c2d] (rev 05)
00:1b.0 Audio device [0403]: Intel Corporation 6 Series/C200 Series Chipset Family High Definition Audio Controller [8086:1c20] (rev 05)
00:1c.0 PCI bridge [0604]: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 1 [8086:1c10] (rev b5)
00:1c.1 PCI bridge [0604]: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 2 [8086:1c12] (rev b5)
00:1c.3 PCI bridge [0604]: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 4 [8086:1c16] (rev b5)
00:1c.4 PCI bridge [0604]: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 5 [8086:1c18] (rev b5)
00:1c.5 PCI bridge [0604]: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 6 [8086:1c1a] (rev b5)
00:1d.0 USB controller [0c03]: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #1 [8086:1c26] (rev 05)
00:1f.0 ISA bridge [0601]: Intel Corporation HM67 Express Chipset Family LPC Controller [8086:1c4b] (rev 05)
00:1f.2 SATA controller [0106]: Intel Corporation 6 Series/C200 Series Chipset Family 6 port SATA AHCI Controller [8086:1c03] (rev 05)
00:1f.3 SMBus [0c05]: Intel Corporation 6 Series/C200 Series Chipset Family SMBus Controller [8086:1c22] (rev 05)
01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GF108M [GeForce GT 525M] [10de:0df5] (rev a1)
03:00.0 Network controller [0280]: Intel Corporation Centrino Wireless-N 1030 [Rainbow Peak] [8086:008a] (rev 34)
04:00.0 USB controller [0c03]: NEC Corporation uPD720200 USB 3.0 Host Controller [1033:0194] (rev 04)
06:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller [10ec:8168] (rev 06)

Question: how do I install the WiFi driver?
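The lspci output shows an Intel Centrino Wireless-N 1030, which on Debian uses the iwlwifi driver plus non-free firmware. A hedged sketch of the usual steps on Debian 9 (stretch); the mirror line is only an example, match it to your existing sources.list:

    # 1. add non-free to the stretch sources (example line)
    echo 'deb http://deb.debian.org/debian stretch main contrib non-free' \
        >> /etc/apt/sources.list

    # 2. install the Intel wireless firmware and reload the driver
    apt update
    apt install firmware-iwlwifi
    modprobe -r iwlwifi && modprobe iwlwifi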

Kali Linux fails to boot after apt-get dist-upgrade

Posted: 26 Jun 2022 02:05 PM PDT

After running

apt-get dist-upgrade  

on my Kali Linux rolling edition, the OS fails to boot every time and gets stuck at these messages:

loading, please wait...
'disk/by-uuid/1aacd4f4-do73-4e39-b456-fc7d3f78662c": invalid path for logical volume.
fsck from util-linux 2.29.1
/dev/sda1: clean, 374044/30015488 files, 4391943/120057088 blocks
[ 9.507788] kvm: disabled by bios
[ 9.550513] kvm: disabled by bios
[ 9.598133] kvm: disabled by bios
[ 9.636218] kvm: disabled by bios
[FAILED] Failed to start Open Vulnerability Assessment System Scanner Daemon. See 'systemctl status openvas-scanner.service' for details.
[OK] Started WPA supplicant.
[FAILED] Failed to start Open Vulnerability Assessment System Manager Daemon. See 'systemctl status openvas-manager.service' for details.
[OK] Reached target Multi-User System.
[OK] Reached target Graphical Interface.
     Starting Update UTMP about System Runlevel Changes...
[OK] Started Update UTMP about System Runlevel Changes.

I noticed some changes in the GRUB menu: the wallpaper (Kali logo) disappeared.

I tried booting in recovery mode running

apt-get clean && apt-get update  

but that fails with some error messages. Any help?

Apache Multiple Domains and Multiple SSL to same IP and folder

Posted: 26 Jun 2022 06:06 PM PDT

I am building a system and I need to use the same IP and same folder for a site with multiple domains and a separate SSL certificate for each one. I also do not want them to redirect or forward, because the site can handle the different domains on its own.

so like

site1.example.com 192.168.0.2:443 /var/www/html
site2.example.com 192.168.0.2:443 /var/www/html
site3.example.com 192.168.0.2:443 /var/www/html
site4.example.com 192.168.0.2:443 /var/www/html

I am using Ubuntu 14.04 with Apache2. I really have no idea how the hosts/vhost file should look for this; can someone show me an example?
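Name-based SSL virtual hosts on a single IP rely on SNI, which the Apache 2.4 shipped with Ubuntu 14.04 supports. A hedged sketch of what the vhost definitions could look like (certificate paths are placeholders); one block per domain, each with its own certificate but the same DocumentRoot:

    <VirtualHost 192.168.0.2:443>
        ServerName site1.example.com
        DocumentRoot /var/www/html

        SSLEngine on
        SSLCertificateFile    /etc/ssl/certs/site1.example.com.crt
        SSLCertificateKeyFile /etc/ssl/private/site1.example.com.key
    </VirtualHost>

    <VirtualHost 192.168.0.2:443>
        ServerName site2.example.com
        DocumentRoot /var/www/html

        SSLEngine on
        SSLCertificateFile    /etc/ssl/certs/site2.example.com.crt
        SSLCertificateKeyFile /etc/ssl/private/site2.example.com.key
    </VirtualHost>

On a stock Debian/Ubuntu layout these would go in files under /etc/apache2/sites-available/, enabled with a2enmod ssl and a2ensite, then a service apache2 reload.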

ip domain name/FQDN using dig

Posted: 26 Jun 2022 07:07 PM PDT

When I do a dig on a bare hostname in our network, it doesn't give me the IP address, but it does work when I add the domain name, or when I do a dig +search.

Why is the domain name a prerequisite for dig to resolve the hostname to an IP?
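In short: dig queries the name servers directly and, unlike most applications that go through the resolver library, it does not apply the search/domain list from /etc/resolv.conf unless asked to. A quick illustration (the domain is hypothetical):

    dig myhost                  # queries the literal name "myhost." - typically NXDOMAIN
    dig myhost.example.com      # fully qualified - resolves
    dig +search myhost          # applies the search list from /etc/resolv.conf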

Linux does not proxy-arp for me, despite the documentation suggesting that it does

Posted: 26 Jun 2022 03:00 PM PDT

I am working on a PDP-10 emulator (see https://github.com/Rhialto/klh10 ). The operating system installed inside it may want to communicate with the outside world via IPv4 (which was just gaining use when those machines were popular). For this purpose, the emulator opens a packet filter (or alternatively, a tap device) on the host machine.

Suppose you're on a local network, 10.0.0.x. The emulated OS may use an IPv4 address of, say, 10.0.0.51.

In order for other hosts on the same network to be able to communicate with the virtual host, they send ARP requests for 10.0.0.51. I want the Unix kernel to answer these requests for me with a sensible ethernet address (which is called proxy-ARP).

To make the Unix do this, the emulator does (the equivalent of) "arp -s 10.0.0.51 01:23:45:56:78:9A pub", where the ethernet address of the host OS is used.

On other Unixen than Linux, this has the desired effect. If I attempt to telnet, or ping, to 10.0.0.51 I see the ARP requests go out for the emulated host, and replies come back:

23:13:42.391941 ARP, Request who-has 10.0.0.51 tell 10.0.0.16, length 46
23:13:42.391954 ARP, Reply 10.0.0.51 is-at f6:2b:a4:a0:76:b0 (oui Unknown), length 28

However, on Linux (I have Ubuntu 15.10), this does not work. The entry does show up in the ARP table with "arp -a", although in a weird way:

? (10.0.0.51) at <from_interface> PERM PUB on eth0  

I have tried a few seemingly related sysctls to try to enable the proxy ARPing, such as

net.ipv4.conf.all.proxy_arp = 1
net.ipv4.conf.default.proxy_arp = 1
net.ipv4.conf.eth0.proxy_arp = 1

and even

net.ipv4.ip_forward = 1  

but none of this helps. What am I missing?

I can test this just using the arp command on the Linux box, a tcpdump for observation, and another box to initiate ARP requests. When I get it to work, I can install any necessary extra setup steps into the emulator.

EDIT: here is a simple scenario to try, if you have 2 machines on the same network, one of which is Linux:

  1. On the Linux box, do sudo arp -s 10.0.0.51 01:23:45:56:78:9A pub. You may need to substitute a different IP address if you're using a different local network; the address should not exist but fit inside your network. 192.168.0.51 could be a possibility. Also, I noticed that Ubuntu refused to accept random ethernet addresses, so you may need to substitute an ethernet address of the eth0 interface.
  2. On the same or other box, sudo tcpdump -i eth0 arp. This will show all ARP requests and replies on the network.
  3. On some other box, which may be a different operating system altogether, do ping 10.0.0.51 (or the address you used, of course). Expected result: the running tcpdump command should show ARP Requests and ARP Replies. If it doesn't, I would like to know what setting is needed to make it happen. And if this is Ubuntu-specific perhaps. The ping will ultimately fail (no host by that IP address is available) but that is immaterial in this test. If it says ping: sendto: Host is down it means it knows there is no ARP Reply.
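For what it's worth, a hedged sketch of the iproute2 way to create the published entry, which may behave differently from the old arp ... pub path on modern kernels (substitute your own address and interface):

    # ask the kernel to answer ARP requests for 10.0.0.51 on eth0
    ip neigh add proxy 10.0.0.51 dev eth0

    # list proxy entries to confirm
    ip neigh show proxy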

Extundelete can't restore the file

Posted: 26 Jun 2022 04:06 PM PDT

I'm trying to restore 2 important tar.gz files. I know their directory, but extundelete is not restoring them, although it's giving me the inode number.

Loading filesystem metadata ... 2127 groups loaded.
Loading journal descriptors ... 26473 descriptors loaded.
Unable to restore inode 3538958 (file.tar.gz): No data found.
Unable to restore file file.tar.gz
extundelete: Operation not permitted when trying to examine filesystem
extundelete: Operation not permitted when trying to examine filesystem

And

Loading filesystem metadata ... 2127 groups loaded.
Loading journal descriptors ... 26473 descriptors loaded.
Unable to restore inode 3538958 (file.tar.gz): No data found.
Unable to restore file file2.tar.gz
extundelete: Operation not permitted when trying to examine filesystem
extundelete: Operation not permitted when trying to examine filesystem

Is there a way to repair the inode or get the file?

Do you advise using other recovery software for CentOS 6 64-bit?
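A couple of hedged suggestions rather than guarantees: extundelete wants the filesystem unmounted (or at least read-only) while it examines it, and it can also be pointed at an inode directly; the device name and paths below are placeholders.

    # work on an unmounted (or read-only) filesystem
    umount /dev/sdXN            # or: mount -o remount,ro /mountpoint

    # try by path and by inode number
    extundelete /dev/sdXN --restore-file path/to/file.tar.gz
    extundelete /dev/sdXN --restore-inode 3538958

If it still reports "No data found", the data blocks may already have been reused; signature-based carving tools such as photorec (from the testdisk package) are a possible fallback on CentOS 6.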

Fastest way of working out uncompressed size of large GZIPPED file

Posted: 26 Jun 2022 03:42 PM PDT

Once a file is gzipped, is there a way of quickly querying it to say what the uncompressed file size is (without decompressing it), especially in cases where the uncompressed file is > 4GB in size.

According to the RFC https://www.rfc-editor.org/rfc/rfc1952#page-5 you can query the last 4 bytes of the file, but if the uncompressed file was > 4GB then the value just represents the uncompressed value modulo 2^32
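For reference, a sketch of reading that trailing ISIZE field directly; it assumes a little-endian machine such as x86 (od uses host byte order), and it still only yields the size modulo 2^32:

    # last 4 bytes of the gzip stream = uncompressed size mod 2^32 (little-endian)
    tail -c 4 foo.gz | od -An -tu4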

This value can also be retrieved by running gunzip -l foo.gz, however the "uncompressed" column just contains uncompressed value modulo 2^32 again, presumably as it's reading the footer as described above.

I was just wondering if there is a way of getting the uncompressed file size without having to decompress it first; this would be especially useful in the case where gzipped files contain 50GB+ of data and would take a while to decompress using methods like gzcat foo.gz | wc -c.


EDIT: The 4GB limitation is openly acknowledged in the man page of the gzip utility included with OSX (Apple gzip 242)

BUGS
    According to RFC 1952, the recorded file size is stored in a 32-bit
    integer, therefore, it can not represent files larger than 4GB. This
    limitation also applies to -l option of gzip utility.

Cronjob to run script every 3 weeks on Wednesday

Posted: 26 Jun 2022 07:56 PM PDT

Is it possible to schedule a cron job that would run every three weeks on Wednesday (8AM) only? Or, if that is not possible, to run a job every 27 days or less but on Wednesday at 8AM.
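cron's fields only express day-of-week/day-of-month patterns, so "every third Wednesday" is usually done by firing every Wednesday at 08:00 and letting the job itself decide whether this is a "third" week. A hedged crontab sketch (note the escaped % signs, which cron otherwise treats as line separators; the path is a placeholder):

    # m h dom mon dow  command
    0 8 * * 3  [ $(( $(date +\%s) / 604800 \% 3 )) -eq 0 ] && /path/to/script

Here $(date +%s)/604800 counts whole weeks since the epoch, and the modulo-3 test lets the command run only every third Wednesday.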

How to check which GPU is active in Linux?

Posted: 26 Jun 2022 06:20 PM PDT

I have 2 GPUs in my netbook. How do I know which one I'm actually using at any given moment?
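A couple of generic checks, assuming a typical integrated + discrete setup (glxinfo comes from the mesa-utils/mesa-demos package, and optirun only applies if Bumblebee is installed):

    # which kernel driver is bound to each GPU
    lspci -k | grep -EA3 'VGA|3D'

    # which GPU is rendering the current OpenGL context
    glxinfo | grep -i 'renderer string'

    # on Bumblebee-based hybrid NVIDIA setups, compare against the discrete GPU
    optirun glxinfo | grep -i 'renderer string'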
