sftp with chroot and file creation problem Posted: 27 Aug 2021 09:51 AM PDT I configured a second sshd systemd service that uses its own sshd configuration with a different TCP port and the following sftp subsystem configuration:
Match Group sftponly
ChrootDirectory /srv/%u
AllowTcpForwarding no
ForceCommand internal-sftp
PubkeyAuthentication yes
PasswordAuthentication no
PermitTunnel no
AllowAgentForwarding no
X11Forwarding no
The user I created in the sftponly group is: uid=1001(sftpuser) gid=1001(sftponly) groups=1001(sftponly)
The directory tree for the chroot is:
drwxr-xr-x 3 root root 22 Aug 27 15:43 /srv
drwxr-xr-x 4 root root 34 Aug 27 18:27 /srv/sftpuser
drwx------ 2 sftpuser sftponly 29 Aug 27 15:43 /srv/sftpuser/.ssh
-rw-r--r-- 1 sftpuser sftponly 398 Aug 27 15:43 /srv/sftpuser/.ssh/authorized_keys
I can successfully connect via sftp with the private key, but I can't create any file in the user's /srv/%u chroot directory:
sftp> ls -al
drwxr-xr-x 3 root root 18 Aug 27 16:38 .
drwxr-xr-x 3 root root 18 Aug 27 16:38 ..
drwx------ 2 sftpuser sftponly 29 Aug 27 13:43 .ssh
sftp> mkdir one
Couldn't create directory: Permission denied
sftp>
When I do chown sftpuser /srv/sftpuser and go back to the active sftp session I can create files, but after I log out I can't log in over sftp anymore until I change the /srv/%u directory back to being owned by root:
Connection to 192.168.1.110 closed by remote host. Connection closed
I can of course create an additional directory inside /srv/%u (/srv/sftpuser) owned by sftpuser, but is this the only solution with chroot? Why can't the user change/upload files directly in /srv/%u ?
Additional question - how do I prevent other users on the system from using this custom sshd instance configured for sftp only? Even with the two options PubkeyAuthentication and PasswordAuthentication set to no above the Subsystem line in the custom sshd_config_sftponly, and after restarting the sshd daemon, regular system users can still log in with their password on that custom sshd port. |
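A minimal sketch of the usual workaround, assuming the chroot must stay root-owned (OpenSSH enforces that ChrootDirectory and all its parents are root-owned and not writable by group/others) and using a hypothetical "upload" subdirectory as the writable area:
install -d -o root -g root -m 755 /srv/sftpuser
install -d -o sftpuser -g sftponly -m 700 /srv/sftpuser/upload
For the second question, one option is to restrict the whole sftp-only instance to the group in its sshd_config (outside any Match block), so other accounts are rejected on that port regardless of how they authenticate:
AllowGroups sftponly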
How do I set an interface to be used for DNS on Linux? Posted: 27 Aug 2021 09:39 AM PDT I have a Linux machine with 3 network interfaces. I noticed that when the main network interface (the one with the lowest metric) loses internet connectivity, all other network interfaces are no longer able to resolve DNS queries. Once I manually increase the metric of the main interface so it is no longer the one with the lowest metric, DNS starts working again for all other interfaces. My understanding is that DNS is resolved through the "main" interface, the one with the lowest metric. Is there a way to configure the machine so that every interface uses its own DNS servers for resolution? |
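A hedged sketch, assuming the machine runs systemd-resolved (the interface name and server address below are placeholders, not taken from the question):
resolvectl dns eth1 192.0.2.53         # DNS server to use for lookups routed to eth1
resolvectl domain eth1 '~example.com'  # route lookups for this domain over eth1
resolvectl status                      # shows which DNS server is attached to which link
Without a split-DNS-aware resolver, a plain /etc/resolv.conf has no notion of interfaces, so all queries simply follow the default route.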
/bin/bash /var/tmp/.system/[scan] Posted: 27 Aug 2021 09:25 AM PDT I noticed a process using 15% of my CPU. Checking the process, it just shows [scan]. If I try to kill this process, it restarts itself. When I check it with ps -fp it shows:
UID PID PPID C STIME TTY TIME CMD
root 26702 1 15 16:03 ? 00:01:59 /bin/bash /var/tmp/.system/[scan]
Any help please? |
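A few standard diagnostic commands (a sketch, not a full incident-response procedure) to find what keeps restarting it before deleting anything:
ls -la /var/tmp/.system/                                      # inspect the dropped files (the parent PID is 1, so something respawns it)
cat /var/tmp/.system/*                                        # a bash script is readable and may reveal the persistence method
crontab -l; ls -la /etc/cron.* /var/spool/cron 2>/dev/null    # cron entries are a common respawn mechanism
systemctl list-timers; systemctl list-units --type=service    # rogue units or timers are another
Since it runs as root, the machine should be treated as compromised; cleaning usually means removing the persistence entry, then killing the process and removing /var/tmp/.system, and ideally reinstalling.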
What is dbus-launch? Posted: 27 Aug 2021 09:10 AM PDT I use Linux Mint. I switched from Cinnamon to i3wm a few weeks ago and I'm loving i3wm. Now, when I run gnome-disks from the terminal, it takes about 9 to 20 seconds to open. But when I run dbus-launch gnome-disks , it takes less than 2 seconds (sometimes not even a second) to open. So, what is dbus-launch? And why does it take so long to open some GNOME programs without dbus-launch on i3wm? Thank you! :) |
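dbus-launch starts a D-Bus session bus and exports its address to the program it runs; many GNOME applications stall on D-Bus timeouts when no session bus is reachable. A hedged sketch, assuming i3 is started from ~/.xinitrc (sessions started through a display manager usually get this for free):
# in ~/.xinitrc - run the whole window-manager session under one session bus
exec dbus-launch --exit-with-session i3
Programs started from within that session then inherit DBUS_SESSION_BUS_ADDRESS and no longer need a per-command dbus-launch.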
pgrep process count shows extra count Posted: 27 Aug 2021 08:21 AM PDT I have a script named server.sh:
#!/bin/bash
process_count=$(ps aux | grep server.sh | grep -v grep | wc -l )
echo "total process running:"
echo $process_count
... other script code
When I run the script I get this output:
./server.sh
total process running: 2
Why do I get a process count of 2 instead of 1? Only one instance of the script is running and I have also excluded the grep process. Even using pgrep -f server.sh and excluding pgrep gives 2 as the process count. |
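The second match is the command substitution itself: bash forks a subshell to run the $( ... ) pipeline, and that child keeps the parent's command line (/bin/bash ./server.sh), so ps and pgrep see two matching processes while the substitution runs. If the goal is "is another copy of this script already running?", a lock is more reliable than counting matches - a minimal sketch, with a hypothetical lock file path:
#!/bin/bash
exec 9>/var/tmp/server.sh.lock          # open a file descriptor on the lock file
if ! flock -n 9; then                   # non-blocking: fail if another instance holds the lock
    echo "another instance is already running"
    exit 1
fi
# ... rest of the script runs exclusively; the lock is released when the script exits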
Command line http analyzer Posted: 27 Aug 2021 08:14 AM PDT I would like to learn more about http(s) requests. Learning requires doing, not just reading. I tried inspecting tcpdump output when loading a simple page, but the packets are too low-level and I am not knowledgeable enough to understand them. Is there a similar command line tool that instead shows http packets/headers? I guess wireshark can do that, but a command-line tool would be so much easier. Target OSes are Arch and Raspbian. |
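A few commonly used command-line options for this (hostnames and ports below are placeholders):
curl -v https://example.com/ -o /dev/null    # prints the request and response headers curl exchanges
sudo ngrep -d any -W byline port 80          # dumps plaintext HTTP on the wire, reformatted line by line
sudo tshark -i any -Y http -O http           # Wireshark's CLI: capture and decode only the HTTP layer
For HTTPS the on-the-wire payload is encrypted, so packet sniffers only show the TLS handshake; curl -v (or an intercepting proxy such as mitmproxy) is the easier way to see the actual headers there.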
What are Linux system libraries? Posted: 27 Aug 2021 07:57 AM PDT When I look around the web for information about the architecture of Linux, many articles say that one component is what is known as "System Libraries", but I'm unable to find an explanation of what these are. What is their job in the system, where are they generally found, and what are some examples of them? Thank you for any information. |
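Loosely speaking, "system libraries" are the shared libraries (glibc and friends) that sit between programs and the kernel's system calls; they typically live under /lib and /usr/lib or their architecture-specific subdirectories. You can see which ones a given program uses with ldd - the output below is illustrative and varies by distribution:
$ ldd /bin/ls
        linux-vdso.so.1
        libselinux.so.1 => /lib/x86_64-linux-gnu/libselinux.so.1
        libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6
        /lib64/ld-linux-x86-64.so.2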
How do I solve this error while trying to install Ubuntu on Vagrant? Posted: 27 Aug 2021 07:39 AM PDT How do I solve the following error: "the executable cmd.exe vagrant is trying to run was not found in the %path%"? |
How can I get my script to work exactly like my terminal command? Posted: 27 Aug 2021 08:17 AM PDT tl;dr The command git fetch origin [branch-name] is not working when invoked within a script, but works when invoked from shell. Both script and terminal are working just fine on another box. The box is behind a corporate proxy. I assume there is some difference in the environment in the script compared to the shell. Question How can I get this command to run from within the script? Detailed Info Back Story Everything worked on both boxes until the repository was migrated to GitHub. The origin was updated accordingly on both systems. The development system worked immediately, while the staging system just won't work. Script This is how the part of the script looks it checks all repositories are reached before acutally starting and therefor stops here: check_git_connection_mapbender() { cd ${installation_folder}mapbender git fetch origin ${git_mapbender_repositoryBranch} &>/dev/null if [ $? -ne 0 ]; then echo "Fetching Mapbender repository failed!" echo "Abording update..." exit 1 fi set_git_filemode_config } The two echo lines are echoed out and the script exits. The variables are all set. What I've tried compare shells As I am suspecting some problem with environment variables I was trying to invoke the failing command git fetch origin [branch-name] from the terminal directly. The command worked in all the cases, when invoked from shell: me@box:/path/to/repo$ git fetch origin [branch-name] me@box:/path/to/repo$ sudo git fetch origin [branch-name] me@box:$ sudo -i cd /path/to/repo && git fetch origin [branch-name] me@box:$ sudo su - -> root@box: cd /path/to/repo && git fetch origin [branch-name] What does not seem to work is the following way: me@box:$ sudo -i -> root@box: cd /path/to/repo && git fetch origin [branch-name] <-- this does throw the error like in the script git config At first I assured the effective git config is the same on both systems. Since then I tried adding certain options on the staging system to fix the problem, but without success. I tried setting: url.http://.insteadof=https:// http.https://github.com.sslverify=false http.proxy https.proxy http.sslCert As it didn't help I removed it again. debugging connection I added export GIT_TRACE_CURL=true to the script so it looks like this: check_git_connection_mapbender() { cd ${installation_folder}mapbender # DEBUG export GIT_TRACE_CURL=true source /etc/environment echo $http_proxy echo $https_proxy # END DEBUG echo $git_mapbender_repositoryBranch git fetch origin ${git_mapbender_repositoryBranch} # &>/dev/null if [ $? -ne 0 ]; then echo "Fetching Mapbender repository failed!" echo "Abording update..." exit 1 fi set_git_filemode_config } ... and the output is [correct http_proxy environment variable] [correct https_proxy environment variable] [correct branch name] 16:52:17.600248 http.c:599 == Info: Couldn't find host github.com in the .netrc file; using defaults 16:52:17.603973 http.c:599 == Info: Trying 140.82.121.4... 16:52:17.604035 http.c:599 == Info: TCP_NODELAY set 16:52:17.605297 http.c:599 == Info: connect to 140.82.121.4 port 443 failed: Verbindungsaufbau abgelehnt 16:52:17.605372 http.c:599 == Info: Failed to connect to github.com port 443: Verbindungsaufbau abgelehnt 16:52:17.605386 http.c:599 == Info: Closing connection 0 fatal: unable to access 'https://github.com/LVGL-SL/mapbender-sl.git/': Failed to connect to github.com port 443: Verbindungsaufbau abgelehnt Fetching Mapbender repository failed! 
Abording update...
proxy configuration I'm told that the systems should be subjected to the same firewall rule sets, as there is one rule set for the whole subnet. |
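The curl trace shows git connecting straight to github.com:443 (and being refused) instead of going through the proxy, which suggests the proxy environment is not reaching that particular git invocation - root's environment under sudo -i is a classic culprit. A hedged sketch of making the proxy explicit regardless of the caller's environment; the variables are assumed to hold the same values the script already echoes:
git config http.proxy "$https_proxy"    # git/libcurl use http.proxy for https remotes as well
or, just for the one command inside the function:
https_proxy="$https_proxy" http_proxy="$http_proxy" git fetch origin "${git_mapbender_repositoryBranch}"
Alternatively, keeping http_proxy/https_proxy in sudo's env_keep (or using sudo -E) lets the existing environment survive privilege elevation.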
webmstrssh key authentication still is asking for password Posted: 27 Aug 2021 07:38 AM PDT Hi everybody I am process for creating a passwordless ssh connection between servers. I have created the key via keygen and have copied to the remote server via this command: ssh-copy-id -i id_rsa.pub webmstr@192.168.1.241 Local system -bash-4.2$ ls -la total 24 drwx------. 2 webmstr webmstr 108 Aug 27 09:25 . drwxr-xr-x. 4 webmstr webmstr 4096 Aug 26 15:26 .. -rw-------. 1 webmstr webmstr 1202 Aug 25 10:43 authorized_keys -rw-------. 1 webmstr webmstr 1679 Dec 15 2020 id_rsa -rw-r--r--. 1 webmstr webmstr 411 Dec 15 2020 id_rsa.pub -rw-r--r--. 1 webmstr webmstr 2609 Aug 25 13:19 known_hosts -rw-r--r--. 1 webmstr webmstr 1211 Jun 30 10:13 known_hosts_06292021 -bash-4.2$ ssh-copy-id -i id_rsa.pub webmstr@192.168.1.241 /usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "id_rsa.pub" /usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed /usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys webmstr@192.168.1.241's password: Number of key(s) added: 1 Now try logging into the machine, with: "ssh 'webmstr@192.168.1.241'" and check to make sure that only the key(s) you wanted were added. -bash-4.2$ it asks for the password I entered but when I try to log in it still asks for password. I read about permissions and made sure the home dir is 700, .ssh is 700 and authorized keys is 600 but still get a password prompt: remote system(permissions): [webmstr@ups-rhelmxi-14 home]$ ls -ld webmstr/ drwx------. 7 webmstr webmstr 4096 Aug 26 17:00 webmstr/ [webmstr@ups-rhelmxi-14 home]$ cd webmstr/ [webmstr@ups-rhelmxi-14 ~]$ ls -lad .ssh drwx------. 2 webmstr webmstr 98 Aug 27 07:40 .ssh [webmstr@ups-rhelmxi-14 ~]$ cd .ssh [webmstr@ups-rhelmxi-14 .ssh]$ ls -la total 20 drwx------. 2 webmstr webmstr 98 Aug 27 07:40 . drwx------. 7 webmstr webmstr 4096 Aug 26 17:00 .. -rw-------. 1 webmstr webmstr 1803 Aug 27 09:25 authorized_keys -rw-------. 1 webmstr webmstr 3102 Aug 26 16:23 authorized_keys_old -rw-r--r--. 1 webmstr webmstr 349 Aug 27 07:43 known_hosts -rw-r--r--. 1 webmstr webmstr 348 Aug 26 17:00 known_hosts_old [webmstr@ups-rhelmxi-14 .ssh]$ I have tried multiple time to add the key. when I do a ssh connection with verbose from local system to remote system 192.168.1.241: -bash-4.2$ ssh webmstr@192.168.1.241 -vv OpenSSH_7.4p1, OpenSSL 1.0.2k-fips 26 Jan 2017 debug1: Reading configuration data /etc/ssh/ssh_config debug1: /etc/ssh/ssh_config line 59: Applying options for * debug2: resolving "192.168.1.241" port 22 debug2: ssh_connect_direct: needpriv 0 debug1: Connecting to 192.168.1.241 [192.168.1.241] port 22. debug1: Connection established. 
debug1: identity file /home/webmstr/.ssh/id_rsa type 1 debug1: key_load_public: No such file or directory debug1: identity file /home/webmstr/.ssh/id_rsa-cert type -1 debug1: key_load_public: No such file or directory debug1: identity file /home/webmstr/.ssh/id_dsa type -1 debug1: key_load_public: No such file or directory debug1: identity file /home/webmstr/.ssh/id_dsa-cert type -1 debug1: key_load_public: No such file or directory debug1: identity file /home/webmstr/.ssh/id_ecdsa type -1 debug1: key_load_public: No such file or directory debug1: identity file /home/webmstr/.ssh/id_ecdsa-cert type -1 debug1: key_load_public: No such file or directory debug1: identity file /home/webmstr/.ssh/id_ed25519 type -1 debug1: key_load_public: No such file or directory debug1: identity file /home/webmstr/.ssh/id_ed25519-cert type -1 debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_7.4 debug1: Remote protocol version 2.0, remote software version OpenSSH_7.4 debug1: match: OpenSSH_7.4 pat OpenSSH* compat 0x04000000 debug2: fd 3 setting O_NONBLOCK debug1: Authenticating to 192.168.1.241:22 as 'webmstr' debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug2: local client KEXINIT proposal debug2: KEX algorithms: curve25519-sha256,curve25519-sha256@libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha256,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1,ext-info-c debug2: host key algorithms: ecdsa-sha2-nistp256-cert-v01@openssh.com,ecdsa-sha2-nistp384-cert-v01@openssh.com,ecdsa-sha2-nistp521-cert-v01@openssh.com,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,ssh-ed25519-cert-v01@openssh.com,ssh-rsa-cert-v01@openssh.com,ssh-dss-cert-v01@openssh.com,ssh-ed25519,rsa-sha2-512,rsa-sha2-256,ssh-rsa,ssh-dss debug2: ciphers ctos: chacha20-poly1305@openssh.com,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com,aes128-cbc,aes192-cbc,aes256-cbc debug2: ciphers stoc: chacha20-poly1305@openssh.com,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com,aes128-cbc,aes192-cbc,aes256-cbc debug2: MACs ctos: umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1 debug2: MACs stoc: umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1 debug2: compression ctos: none,zlib@openssh.com,zlib debug2: compression stoc: none,zlib@openssh.com,zlib debug2: languages ctos: debug2: languages stoc: debug2: first_kex_follows 0 debug2: reserved 0 debug2: peer server KEXINIT proposal debug2: KEX algorithms: curve25519-sha256,curve25519-sha256@libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha256,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: host key algorithms: ssh-rsa,rsa-sha2-512,rsa-sha2-256,ecdsa-sha2-nistp256,ssh-ed25519 debug2: ciphers ctos: 
chacha20-poly1305@openssh.com,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com,aes128-cbc,aes192-cbc,aes256-cbc,blowfish-cbc,cast128-cbc,3des-cbc debug2: ciphers stoc: chacha20-poly1305@openssh.com,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com,aes128-cbc,aes192-cbc,aes256-cbc,blowfish-cbc,cast128-cbc,3des-cbc debug2: MACs ctos: umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1 debug2: MACs stoc: umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1 debug2: compression ctos: none,zlib@openssh.com debug2: compression stoc: none,zlib@openssh.com debug2: languages ctos: debug2: languages stoc: debug2: first_kex_follows 0 debug2: reserved 0 debug1: kex: algorithm: curve25519-sha256 debug1: kex: host key algorithm: ecdsa-sha2-nistp256 debug1: kex: server->client cipher: chacha20-poly1305@openssh.com MAC: <implicit> compression: none debug1: kex: client->server cipher: chacha20-poly1305@openssh.com MAC: <implicit> compression: none debug1: kex: curve25519-sha256 need=64 dh_need=64 debug1: kex: curve25519-sha256 need=64 dh_need=64 debug1: expecting SSH2_MSG_KEX_ECDH_REPLY debug1: Server host key: ecdsa-sha2-nistp256 SHA256:HmLSkqaa9vDZQ85Yl/zv14Rk9/KVp/jsm4jiUMxUa2c debug1: Host '192.168.1.241' is known and matches the ECDSA host key. debug1: Found key in /home/webmstr/.ssh/known_hosts:4 debug2: set_newkeys: mode 1 debug1: rekey after 134217728 blocks debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug2: set_newkeys: mode 0 debug1: rekey after 134217728 blocks debug2: key: /home/webmstr/.ssh/id_rsa (0x55f4c98aba00) debug2: key: /home/webmstr/.ssh/id_dsa ((nil)) debug2: key: /home/webmstr/.ssh/id_ecdsa ((nil)) debug2: key: /home/webmstr/.ssh/id_ed25519 ((nil)) debug1: SSH2_MSG_EXT_INFO received debug1: kex_input_ext_info: server-sig-algs=<rsa-sha2-256,rsa-sha2-512> debug2: service_accept: ssh-userauth debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic,password debug1: Next authentication method: gssapi-keyex debug1: No valid Key exchange context debug2: we did not send a packet, disable method debug1: Next authentication method: gssapi-with-mic debug1: Unspecified GSS failure. Minor code may provide more information No Kerberos credentials available (default cache: KEYRING:persistent:1001) debug1: Unspecified GSS failure. 
Minor code may provide more information No Kerberos credentials available (default cache: KEYRING:persistent:1001) debug2: we did not send a packet, disable method debug1: Next authentication method: publickey debug1: Offering RSA public key: /home/webmstr/.ssh/id_rsa debug2: we sent a publickey packet, wait for reply debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic,password debug1: Trying private key: /home/webmstr/.ssh/id_dsa debug1: Trying private key: /home/webmstr/.ssh/id_ecdsa debug1: Trying private key: /home/webmstr/.ssh/id_ed25519 debug2: we did not send a packet, disable method debug1: Next authentication method: password webmstr@192.168.1.241's password: I updated the permissions for the remote webmstr home directory: [webmstr@ups-rhelmxi-14 home]$ ls -la total 8 drwxr-xr-x. 4 root root 42 Apr 2 2020 . dr-xr-xr-x. 18 root root 239 Jan 6 2021 .. drwxr-xr-x. 5 administrator administrator 4096 Aug 13 16:06 administrator drwxr-x---. 7 webmstr webmstr 4096 Aug 27 10:27 webmstr [webmstr@ups-rhelmxi-14 home]$ when I try from source server: [dba@TS-MXIAppMon .ssh]$ ssh 'webmstr@192.168.1.241' webmstr@192.168.1.241's password: |
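The client-side debug shows the id_rsa key being offered ("we sent a publickey packet, wait for reply") and the server then falling through to password authentication, i.e. the server rejects the key. Since the permissions already look correct, two things worth checking on the remote RHEL box (a hedged diagnostic sketch, run as root or via sudo on 192.168.1.241):
restorecon -R -v /home/webmstr/.ssh                              # RHEL/SELinux: relabel files copied or edited into .ssh
tail -f /var/log/secure                                          # watch sshd's own reason for refusing the key during a login attempt
sshd -T | grep -Ei 'pubkeyauthentication|authorizedkeysfile'     # confirm the effective server settings
It is also worth confirming that the key that ended up in authorized_keys matches the local /home/webmstr/.ssh/id_rsa (compare ssh-keygen -lf fingerprints on both sides), since the remote authorized_keys was recently rewritten.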
Copy files to a particular remote server from a list of servers Posted: 27 Aug 2021 07:43 AM PDT I have 3 remote servers to which I need to copy files from the source server rather than having to run same command multiples times for each server, can it be possible to select one server from the list and then transfer files to that server? My aim here is to avoid writing same piece of code for each server individually. Eg: A = source server B = remote server 1 C = remote server 2 D = remote server 3 (there may be more remote servers in future) Enter which server you want the files to be copied to (B/C/D): if I choose B on command line, following command gets executed: scp xyz.txt user@remoteserver2:/home scp jkl.txt user@remoteserver2:/home Similarly if other options are chosen, files should get copied to that server. Here's the code that i have for copying files on 1 server. #!/bin/sh today=`date '+%Y%m%d'`; min_date=`date -d "$today -14days" +%Y%m%d` max_date=`date -d "$today -1days" +%Y%m%d` read -p "Enter the date you want input files for [yyyymmdd]: " user_date udate=$user_date if [[ $user_date -ge $min_date && $user_date -lt $today ]] then ssh user@server2 mkdir -p /data/${udate}_inputfiles/{f1,f2,f3,f4,f5,f6} echo "Starting to copy files" cd /homepath1 scp *${udate}* user@server2:/data/${udate}_inputfiles/f1 scp *${udate}* user@server2:/data/${udate}_inputfiles/f2 scp *${udate}* user@server2:/data/${udate}_inputfiles/f3 scp *${udate}* user@server2:/data/${udate}_inputfiles/f4 scp *${udate}* user@server2:/data/${udate}_inputfiles/f5 scp *${udate}* user@server2:/data/${udate}_inputfiles/f6 else echo "Entered date is invalid: Please specify date between $min_date and $max_date" fi |
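A minimal sketch of the selection step (host names are placeholders following the question's B/C/D lettering); the existing copy logic then just uses "$dest" instead of a hard-coded user@server2:
read -p "Enter which server you want the files to be copied to (B/C/D): " choice
case "$choice" in
    [Bb]) dest=user@remoteserver1 ;;
    [Cc]) dest=user@remoteserver2 ;;
    [Dd]) dest=user@remoteserver3 ;;
    *)    echo "Unknown server: $choice"; exit 1 ;;
esac
ssh "$dest" mkdir -p /data/${udate}_inputfiles/{f1,f2,f3,f4,f5,f6}
scp *${udate}* "$dest":/data/${udate}_inputfiles/f1
Adding a new remote server later then only means adding one more case line.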
Systemd service does not start (WantedBy=multi-user.target) Posted: 27 Aug 2021 06:55 AM PDT OS: Ubuntu 20.04.3
$ \cat /home/nikhil/.config/systemd/user/Festival.service
[Unit]
Description=Festival Service
[Service]
ExecStart=/usr/bin/festival --server
Restart=on-failure
RestartSec=10
SyslogIdentifier=FestivalService
[Install]
WantedBy=multi-user.target
Description: I ran systemctl --user enable Festival.service and rebooted my system, but the festival server does not start. It only starts when I manually run systemctl --user start Festival.service .
Issue: Could you please tell me why the user service does not work with multi-user.target , which is supposed to run on every boot?
Reference |
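multi-user.target belongs to the system manager; the per-user systemd instance that handles "systemctl --user" has its own default.target, so a user unit installed with WantedBy=multi-user.target has nothing to hook into at boot. A sketch of the usual fix (the user name is taken from the unit path in the question):
[Install]
WantedBy=default.target
systemctl --user daemon-reload
systemctl --user reenable Festival.service   # re-create the enablement symlink against the new target
loginctl enable-linger nikhil                # start the user manager (and its enabled units) at boot, without a login session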
amdgpu: Unsupported power mode 0 on RENOIR Posted: 27 Aug 2021 10:01 AM PDT I'm having trouble with my Lenovo T14 (AMD) laptop with a Ryzen 7 running Kubuntu (20.04.3 LTS). Until yesterday everything worked fine. However, yesterday the laptop suddenly froze and I had to turn it off forcefully. When rebooting I got this message on a black screen:
[ 3.215831] amdgpu 0000:07:00.0: amdgpu: Unsupported power mode 0 on RENOIR
You are in emergency mode. After logging in, type "journalctl -xb" to view system log, "Systemctl reboot" to reboot, "systemctl default" or "exit" to boot into default mode.
Press Enter for maintenance (or press Ctrl+D to continue):
Booting into default mode does not work; I end up at the exact same screen within a few seconds. Rebooting resulted in being able to choose the OS (since I only have Kubuntu, that does not matter) and then I end up at the same place. According to https://askubuntu.com/questions/1355957/amdgpu-unsupported-power-profile-mode-0 this is a kernel problem with the 5.11.0 kernel. Mine is 5.11.0-27, so I tried switching to 5.11.0-25, 5.10.0-1044 and even 5.8.0-63: With 5.11.0-25 and 5.10.0-1044 I get the same result. With 5.8.0-63 I receive a different error message (something about a link not found), but still end up on the same screen. Booting into recovery mode doesn't really help either: it shows me a mixed screen with the recovery mode menu overlaid with lines of the 'emergency mode' screen. When typing, it mostly reacts as if I were on the 'emergency mode' screen, but the recovery mode menu also reacts at times... Booting from a live USB stick (around 6 months old) has worked so far; however, I would not know what to do with it except reinstall my system, which I would like to avoid for now. And if it's a kernel problem, reinstalling probably would not help either, would it? Does anyone have an idea what exactly the problem could be and how to resolve it? Any help is much appreciated. BTW: I tried the workaround suggested in the link above, but it didn't change anything. I also realised that shortly before the freeze, I couldn't access some folders on an external drive and Firefox had crashed several times for no obvious reason. |
Assign sed command correctly Posted: 27 Aug 2021 08:19 AM PDT I am trying to assign the result of a sed command to a variable in bash, but I am unable to escape everything corrrectly (probably just due to my lack of knowledge in bash), I have tried: hash_in_podfile=$( sed -rn 's/^ *pod [\'\"]XXX["\'],.*:commit *=> *["\']([^\'"]*)["\'].*$/\1/p' ${PODS_PODFILE_DIR_PATH}/Podfile ) but I am getting bash_playground.sh: line 9: unexpected EOF while looking for matching `'' UPDATED SCRIPT This is the script I am using updated with the code from the answer. Only the path and the comment have changed: #!\bin\sh PODS_PODFILE_DIR_PATH='/Users/path/to/file' # just a comment hash_in_podfile=$(sed -rnf - <<\! -- "${PODS_PODFILE_DIR_PATH}/Podfile" s/^ *pod ['"]XXX["'],.*:commit *=> *["']([^'"]*)["'].*$/\1/p ! ) echo $hash_in_podfile executed with sh script_name.sh sh --version yields: GNU bash, version 3.2.57(1)-release (x86_64-apple-darwin20) Copyright (C) 2007 Free Software Foundation, Inc. On execution I get: script_name.sh: line 6: unexpected EOF while looking for matching `"' script_name.sh: line 10: syntax error: unexpected end of file |
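Inside a single-quoted shell string there is no way to escape a single quote, which is what trips the parser here. One workaround (a sketch - with BSD/macOS sed, -E replaces GNU sed's -r) is to double-quote the whole sed program and escape only the double quotes, leaving the single quotes literal:
hash_in_podfile=$(sed -rn "s/^ *pod ['\"]XXX[\"'],.*:commit *=> *[\"']([^'\"]*)[\"'].*$/\1/p" "${PODS_PODFILE_DIR_PATH}/Podfile")
echo "$hash_in_podfile"
Another common route is to put the sed program in a separate script file and call sed -f on it, so the shell never parses the quotes at all.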
How to read and consume the input of /dev/input/ Posted: 27 Aug 2021 08:31 AM PDT I just started working with Ubuntu/Linux, so my knowledge is limited. My idea was to use a barcode scanner as an input device in my C++ program. This works perfectly when it comes to reading: I just open the file that represents my barcode scanner (it acts like a keyboard) and read the input in a loop using the input_event struct, like this:
// simplified; the device path is the scanner's entry under /dev/input/by-id/
int connection = open("/dev/input/by-id/my-barcode-scanner", O_RDONLY);
struct input_event ie[64];
int rd, value, size = sizeof(struct input_event);
while ((rd = read(connection, ie, size * 64)) > size) {
    std::cout << "The entered code is: " << ie[1].code << std::endl;
}
This is of course very simplified; I have different sequences that trigger different actions, and that part works just fine. My problem is that my program is not the only one getting the input - the focused UI receives it too. I would like to consume the input of this device so it is not delivered anywhere else. I am used to the very "high level" events of Java, where you can simply consume an event or pass it through, so I am curious whether there is something I can do at this very low level to "consume" it. I already tried reading and overwriting the content, and changing the group of the /dev/input/event file (thinking that if it is not in the input group it would not be used), but apparently it is not as easy as that. Looking forward to ideas or anything that helps me understand this better. |
How to scan for the Pegasus spyware with Debian/KDE & Android: getting " [mvt.android.modules.adb.base] Unable to connect to the device over USB." Posted: 27 Aug 2021 08:34 AM PDT On my Debian11/KDE machine I ran pip3 install mvt --user then I enabled developer options on my Android smartphone (via tapping on the build-number a few times), enabled USB debugging on it, connected to the computer with the fully functional USB cable that was shipped with the phone, ran mvt-android check-adb and confirmed the popup on the phone asking whether the RSA key of the computer is to be trusted (without checking to always trust it). However, it always fails with this output: MVT - Mobile Verification Toolkit https://mvt.re Version: 1.2.5 INFO [mvt.android.cli] Checking Android through adb bridge INFO [mvt.android.cli] Loaded a total of 0 indicators INFO [mvt.android.modules.adb.chrome_history] Running module ChromeHistory... ERROR [mvt.android.modules.adb.base] Unable to connect to the device over USB. Try to unplug, plug the device and start again. There always has been an issue with establishing an USB connection with my Android phone and keeping the connection up for long even with another smartphone and other USB cables and other USB ports, but it usually works after reconnecting a few times and selecting "Open folder" in the popups of the Dolphin file explorer multiple times (usually not long enough to transfer many files or browse files on the phone conveniently but somewhat good enough). How to solve this? |
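mvt-android drives the phone through adb, so it is worth confirming that a plain adb session is stable before blaming mvt (a sketch; run as the same user that runs mvt):
adb kill-server
adb start-server
adb devices          # the phone must be listed as "device", not "unauthorized" or "offline"
adb shell echo ok    # a quick end-to-end test of the debug bridge
If the device shows up as unauthorized, revoke and re-accept the USB-debugging authorization on the phone; if the link keeps dropping, a different cable or port (or adding the vendor's udev rule on Debian) often helps. mvt-android check-adb should then connect on the first try.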
Is there a way to check if a script running as a background process has crashed? Posted: 27 Aug 2021 08:48 AM PDT I am using Ubuntu 20.04 and I want to write a script that checks whether an ffmpeg command running in the background has crashed. After a crash it should reinitialize that command. Can anyone provide bash script code for that? |
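A minimal watchdog sketch (the ffmpeg arguments are placeholders, not taken from the question) that simply restarts the command whenever it exits for any reason:
#!/bin/bash
while true; do
    ffmpeg -i "input_source" "output_target"             # placeholder command line
    echo "ffmpeg exited with status $? - restarting in 5s" >&2
    sleep 5
done
An alternative on Ubuntu 20.04 is to wrap the ffmpeg command in a systemd service with Restart=always, which gives the same supervision plus journald logging without a hand-rolled loop.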
How to use Conky variable with external command? Posted: 27 Aug 2021 07:39 AM PDT I'm trying to create a workaround for Conky's truncation of usernames with the top user function. Using id -nu $uid , I tried this: ${exec id -nu ${top uid \1}} which leads to "bad substitution" errors. Thinking that it may be because the top function is sending a string instead of an integer, I tried creating a lua function: function conky_uid_to_name(uid) num = tonumber(conky_parse(uid)) name = conky_parse('${exec id -nu ${num}}') return name end but this never works either as I can't get the parser to see my variable. How do I send a variable to the name variable to be properly parsed by Conky? I'm sure there's an easier way of doing this, but I'm not finding many lua examples and their docs are severely lacking. |
Bash interpreting a variable assignment as a command Posted: 27 Aug 2021 07:24 AM PDT I've been trying to do something for a couple of days, and I'm stumped; I keep running into the same problem, no matter how I approach this. I have a text file with 2 columns in it; the first is the variable name, the second is the command to be run, with the output being assigned to the variable in the first column. I use read to assign both columns to their own variables, then put the full expression into a new variable and execute it. No matter how I do it, I always get the expression as a command name and the error "command not found." That's all a little convoluted, so let me show you. The script is: while read varName varCmd do echo varName is $varName echo varCmd is $varCmd declare cmd=$varName=$varCmd echo Command is $cmd "$cmd" echo 1st Value is $varFoo echo 2nd Value is $varBar done < testvars.txt And the text file is: varFoo echo foo varBar echo bar Everything works except the assignment execution itself. Here's what I get: varName is varFoo varCmd is echo foo Command is varFoo=echo foo ./testvars.sh: line 8: varFoo=echo foo: command not found 1st Value is 2nd Value is varName is varBar varCmd is echo bar Command is varBar=echo bar ./testvars.sh: line 8: varBar=echo bar: command not found 1st Value is 2nd Value is It looks like Bash is interpreting the whole thing as one command name (string) and not interpreting the = as an operator. What can I do to get Bash to correctly interpret the assignment expression correctly? |
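Bash decides whether a word is an assignment before variable expansion happens, so by the time "$cmd" expands to varFoo=echo foo it is just an ordinary command name. One way to perform the assignment after expansion is printf -v, which writes into the variable whose name is held in another variable - a sketch reusing the question's file format:
while read -r varName varCmd; do
    printf -v "$varName" '%s' "$($varCmd)"   # run the command, store its output under the given name
done < testvars.txt
echo "1st Value is $varFoo"
echo "2nd Value is $varBar"
declare "$varName=$($varCmd)" behaves similarly (but makes the variable local inside functions), and eval would also work, at the cost of evaluating arbitrary text read from the file.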
timedatectl fails to query server Posted: 27 Aug 2021 07:08 AM PDT When running timedatectl to check if my system clock has been synchronized via NTP I get the following: ~> timedatectl Failed to query server: The name org.freedesktop.timedate1 was not provided by any .service files systemd-timedated.service has ran. ~> systemctl status systemd-timedated.service ● systemd-timedated.service - Time & Date Service Loaded: loaded (/usr/lib/systemd/system/systemd-timedated.service; static; vendor preset: enabled) Active: inactive (dead) Docs: man:systemd-timedated.service(8) man:localtime(5) https://www.freedesktop.org/wiki/Software/systemd/timedated Mar 23 14:28:16 cm1sd systemd[1]: Starting Time & Date Service... Mar 23 14:28:16 cm1sd systemd[1]: Started Time & Date Service. Mar 23 14:29:00 cm1sd systemd[1]: systemd-timedated.service: Succeeded. Looking online I haven't found anything talking about this error message. How can I use systemd and timedatectl to have my system clock synchronized with an NTP server? I've also noted nothing under /etc/systemd/ defines the NTP server to use. I'm on an embedded Linux system, built using Buildroot, systemd version 244.5. |
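timedatectl is only a D-Bus front end to systemd-timedated, so the error above means that bus activation of timedated is failing on this image; the actual clock synchronisation is done by systemd-timesyncd, which can be enabled and configured directly (a sketch, assuming timesyncd and a D-Bus daemon were selected in the Buildroot config - the server name is a placeholder):
# /etc/systemd/timesyncd.conf
[Time]
NTP=pool.ntp.org
systemctl enable --now systemd-timesyncd
timedatectl set-ntp true        # only works once timedated is reachable over D-Bus
If the image has no D-Bus daemon at all, timedatectl will keep failing even though timesyncd itself can still sync the clock.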
Debian 10 Wifi Problem with MT7630e Posted: 27 Aug 2021 09:04 AM PDT I installed Debian 10 on an Asus notebook. Right after the installation the Wi-Fi was not working. My WiFi chip is a MEDIATEK Corp. MT7630e 802.11bgn Wireless Network Adapter. I installed the package firmware-misc-nonfree but Debian still didn't find the WiFi chip, so as suggested in many forums I installed this driver: https://github.com/neurobin/MT7630E. As soon as I installed it, everything worked fine. The problems started when I tried to connect to another WiFi network. The network manager starts searching forever for a new WiFi network, using 100% of one CPU core, and there is no way to stop the process (I tried every command I know for stopping NetworkManager, like stopping the service and killing the process). I tried waiting for it to stop, but after hours it never did. I tried rebooting and shutting down multiple times, and the only way to make the WiFi work again is to shut down the notebook by holding down the power button. How can I solve this problem? EDIT: The workaround I'm using consists of installing the driver (the one from GitHub) at startup and uninstalling it before shutdown. It's not a solution, but it's the only way to avoid holding down the power button. |
How to install ocx file to wine Posted: 27 Aug 2021 10:04 AM PDT I am using Mac OS 10.14.5, and I am trying to run an exe file. So I did brew install wine . Then, using wine to run the program yields the following error: 0009:fixme:actctx:parse_depend_manifests Could not find dependent assembly L"Microsoft.Windows.Common-Controls" (6.0.0.0) My wine can run some other exe programs. Because of this, I thought I needed to install mscomctl.ocx and comctl32.ocx into wine . I copied the files from my Windows 7 computer to ~/.wine/drive_c/windows/system32/ and added the corresponding overrides to the "Libraries" tab in winecfg . But I still got the same error. I tried another way of installing the two files. I did brew install winetricks zenity , and ran sh winetricks dlls . The GUI appeared, but it did nothing no matter what I clicked. Also, whenever I clicked anything on the GUI, it outputs to the shell the same message Gtk-WARNING **: Could not load a pixbuf from /org/gtk/libgtk/theme/Adwaita/assets/bullet-symbolic.svg. This may indicate that pixbuf loaders or the mime database could not be found. So how should I install ocx files to wine ? |
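An .ocx is a COM/ActiveX component, so besides copying it into system32 it normally has to be registered; under Wine that is done with Wine's bundled regsvr32 (a sketch, using the same two files mentioned above):
cd ~/.wine/drive_c/windows/system32
wine regsvr32 mscomctl.ocx
wine regsvr32 comctl32.ocx
Note also that the missing "Microsoft.Windows.Common-Controls" assembly usually refers to the native comctl32 common controls rather than the OCX wrappers, which winetricks can install (e.g. by passing the verb on the command line, sh winetricks comctl32, so the broken GUI is never needed).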
Pass an option to a makefile Posted: 27 Aug 2021 10:04 AM PDT Makefile:
my_test:
ifdef $(toto)
	@echo 'toto is defined'
else
	@echo 'no toto around'
endif
Expected behavior:
$ make my_test
no toto around
$ make my_test toto
toto is defined
Current behavior:
$ make my_test
no toto around
$ make my_test toto
no toto around
make: *** No rule to make target `toto'. Stop.
When I run make my_test I get the else text, no toto around , as expected. However make my_test toto gives:
no toto around
make: *** No rule to make target `toto'. Stop.
Makefile version:
$ make -v
GNU Make 3.81
SLE version:
$ cat /etc/*release
VERSION_ID="11.4"
PRETTY_NAME="SUSE Linux Enterprise Server 11 SP4"
PS: The point is to make make my_test verbose when toto is given; if toto is not given, the command should run silently. |
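Two things are going on here: ifdef expects a variable name, not its expansion (ifdef $(toto) tests a variable whose name is the value of toto, usually the empty string), and a bare word after the target on the command line is treated as another target to build, hence "No rule to make target `toto'". A sketch of the conventional form (recipe lines must start with a tab):
my_test:
ifdef toto
	@echo 'toto is defined'
else
	@echo 'no toto around'
endif
and it is invoked with a variable assignment rather than an extra word:
$ make my_test            # prints: no toto around
$ make my_test toto=1     # prints: toto is defined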
Tomcat 8 503 Error with Apache2 mod_jk as Reverse Proxy Posted: 27 Aug 2021 08:06 AM PDT I'm following this guide to setup Tomcat 8 on Ubuntu Server 16.04 using Apache2's mod_jk module as a reverse proxy: https://www.digitalocean.com/community/tutorials/how-to-encrypt-tomcat-8-connections-with-apache-or-nginx-on-ubuntu-16-04 Everything works until the last step, which is to change the HTTP and AJP Connectors in server.xml to only listen on localhost. Here's the change I made to the AJP Connector: <Connector port="8009" address="127.0.0.1" protocol="AJP/1.3" redirectPort="8443" /> Before this change, typing https://myhostname takes me to the Tomcat administration page; after it, I get "503 Service Unavailable". I've temporarily turned off my firewall and removed AppArmor. Here's the relevant portion of mod_jk.log: jk_open_socket::jk_connect.c (817): connect to ::1:8009 failed (errno=111) ajp_connect_to_endpoint::jk_ajp_common.c (1068): (ajp13_worker) Failed opening socket to (::1:8009) (errno=111) ajp_send_request::jk_ajp_common.c (1728): (ajp13_worker) connecting to backend failed. Tomcat is probably not started or is listening on the wrong port (errno=111) What could be causing this, and how can I resolve it? |
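The mod_jk log shows the worker connecting to ::1:8009 (IPv6 localhost), while the AJP connector was just bound to address="127.0.0.1" only, so the connection is refused. One hedged fix is to pin the worker to the IPv4 loopback in workers.properties (the worker name ajp13_worker is taken from the log; the file's location varies by setup):
worker.list=ajp13_worker
worker.ajp13_worker.type=ajp13
worker.ajp13_worker.host=127.0.0.1
worker.ajp13_worker.port=8009
Restart Apache afterwards. Alternatively, making the Tomcat connector listen on the same address mod_jk resolves (e.g. binding it to ::1) achieves the same agreement between the two ends.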
systemd service using 100% of my CPU when it doesn't if I start it without systemd Posted: 27 Aug 2021 07:44 AM PDT I'm using Debian Jessie with the latest updates. I made a systemd service to run a script when my server starts. Here's its configuration: [Unit] Description=(my description) [Service] ExecStart=/usr/bin/bot Restart=restart-always [Install] WantedBy=multi-user.target /usr/bin/bot is a script running a Mono executable. It consists of: #!/bin/bash (cd /path/to/my/executable && mono bot.exe) (I replaced the path here, but the one on my script is correct.) When I run the script /usr/bin/bot normally (simply /usr/bin/bot on my terminal), it is working as expected. top reports it's using between 0 and, say, 20% of my CPU, which is normal. But when I start it with service bot start , top says it's always using at least 100% of my CPU. In both cases bot is working as expected. What could explain such a big difference in CPU usage? Thank you. |
Get separate used memory info from free -m command Posted: 27 Aug 2021 09:26 AM PDT As the output of the free -m command, I get the following: total used free shared buffers cached Mem: 2496 2260 236 0 5 438 -/+ buffers/cache: 1816 680 Swap: 1949 68 1881 I want to get only used memory, like 2260, as output. I tried the following command: free -m | grep Mem | cut -f1 -d " " Help me to improve my command. How can I get it as a percentage, like 35%? |
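A sketch using awk, which is a little sturdier than cut here because the columns are separated by runs of spaces:
free -m | awk '/^Mem:/ {print $3}'                        # used memory in MiB, e.g. 2260 for the output above
free -m | awk '/^Mem:/ {printf "%.0f%%\n", $3/$2*100}'    # used memory as a percentage of total
Note that on this (older) free output the "used" figure includes buffers/cache; the first value of the "-/+ buffers/cache" line (1816 above) is closer to what applications actually consume, and would be $3 of the line matching /buffers\/cache/.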
Check whether files in a file list exist in a certain directory Posted: 27 Aug 2021 08:32 AM PDT The runtime arguments are as follows: $1 is the path to the file containing the list of files $2 is the path to the directory containing the files What I want to do is check that each file listed in $1 exists in the $2 directory I'm thinking something like: for f in 'cat $1' do if (FILEEXISTSIN$2DIRECTORY) then echo '$f exists in $2' else echo '$f is missing in $2' sleep 5 exit fi done As you can see, I want it so that if any of the files listed in $1 don't exist in the directory $2 , the script states this then closes. The only part I can't get my head around is the (FILEEXISTSIN$2DIRECTORY) part. I know that you can do [ -e $f ] but I don't know how you can make sure its checking that it exists in the $2 directory. |
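A minimal sketch of the loop with the missing test filled in - the key point is to join the directory and the file name when testing, and to read the list line by line so names survive intact:
#!/bin/bash
while IFS= read -r f; do
    if [ -e "$2/$f" ]; then
        echo "$f exists in $2"
    else
        echo "$f is missing in $2"
        sleep 5
        exit 1
    fi
done < "$1"
As a side note, the single quotes in for f in 'cat $1' would loop over the literal string rather than run cat; reading the file directly, as above, avoids that, and quoting "$2/$f" keeps file names containing spaces working.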
MediaTomb - Not showing Cover Art from music folder in the photo collection Posted: 27 Aug 2021 07:03 AM PDT MediaTomb adds album covers and DVD covers to the photo collection. That means that when you are going through your photos, you also get album covers intermixed in there. Is there a way of restricting the type of media per folder, so that /path/to/music is restricted to music, /path/to/photos is restricted to photos, etc.? |