Where is the rofi configuration file? Posted: 22 Aug 2021 10:20 AM PDT Where is the rofi configuration file? The various manual/help files are wrong: they state that it is /etc/rofi.rasi, but this file does not exist on my system. Furthermore, ~/.config/rofi is empty, and the variable XDG_CONFIG_HOME is unset. Obviously rofi is getting application information from somewhere, because when it runs, it lists dozens of applications. The question is: where is it getting that information from? Note that the "configuration" information I am asking about here is the application startup commands that rofi uses to start applications. How does it know the names and command-line invocations of the applications that it starts? I am not asking about UI configuration settings, like what color the menu is or things like that. |
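For the question above: rofi's drun mode does not take its application list from rofi.rasi at all; the .rasi file holds only UI settings. The launcher entries come from .desktop files in the XDG data directories. A sketch showing the default search path (the "applications" subdirectory of each entry is what gets scanned):

```shell
# Default XDG data path, used when XDG_DATA_DIRS is unset:
xdg_dirs=${XDG_DATA_DIRS:-/usr/local/share/:/usr/share/}
echo "$xdg_dirs" | tr ':' '\n'    # each dir's applications/ subdir holds .desktop files
```

~/.local/share/applications (via XDG_DATA_HOME) is consulted as well, which is why the list is populated even though ~/.config/rofi is empty.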
SSH file transfer, permission denied Posted: 22 Aug 2021 10:30 AM PDT I am trying to download a file from a target server for a project. Once I have this file I can gain access to root, but when I try to send the file to my Kali machine it doesn't work and says "connection refused". I've tried changing my network so that it is not connected to the victim's network, but that didn't work either, so I am out of options. I have included a screenshot of the error I get and am happy to add any other necessary screenshots. Any help would be greatly appreciated as I am on a tight deadline. Thanks in advance. I have been told I might need to configure the SSH service on the Kali machine, but I'm not sure how. |
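A "connection refused" when pushing the file toward the Kali box usually means no SSH daemon is listening there. A minimal sketch for starting and verifying the service on Kali (assuming the openssh-server package is already installed):

```shell
sudo systemctl enable --now ssh    # start sshd now and on every boot
sudo ss -tlnp | grep ':22'         # confirm something is listening on port 22
```

After that, an scp/sftp transfer toward the Kali machine's IP should at least reach the authentication stage.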
AD authentication failure Ubuntu || Access denied for user 4 (System error) Posted: 22 Aug 2021 09:36 AM PDT I am facing this issue and have tried all the steps available online, but it persists; I need some expert advice. ssh -vvv output: OpenSSH_7.4p1, OpenSSL 1.0.2k-fips 26 Jan 2017 debug1: Reading configuration data /root/.ssh/config debug1: /root/.ssh/config line 2: Applying options for * debug1: Reading configuration data /etc/ssh/ssh_config debug1: /etc/ssh/ssh_config line 58: Applying options for * debug2: resolving "10.xx.xx.xx" port 22 debug2: ssh_connect_direct: needpriv 0 debug1: Connecting to 10.xx.xx.xx [10.xx.xx.xx] port 22. debug1: Connection established. debug1: permanently_set_uid: 0/0 debug1: identity file /root/.ssh/id_rsa type 1 debug1: key_load_public: No such file or directory debug1: identity file /root/.ssh/id_rsa-cert type -1 debug1: key_load_public: No such file or directory debug1: identity file /root/.ssh/id_dsa type -1 debug1: key_load_public: No such file or directory debug1: identity file /root/.ssh/id_dsa-cert type -1 debug1: key_load_public: No such file or directory debug1: identity file /root/.ssh/id_ecdsa type -1 debug1: key_load_public: No such file or directory debug1: identity file /root/.ssh/id_ecdsa-cert type -1 debug1: key_load_public: No such file or directory debug1: identity file /root/.ssh/id_ed25519 type -1 debug1: key_load_public: No such file or directory debug1: identity file /root/.ssh/id_ed25519-cert type -1 debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_7.4 debug1: Remote protocol version 2.0, remote software version OpenSSH_7.2p2 Ubuntu-4ubuntu2.10 debug1: match: OpenSSH_7.2p2 Ubuntu-4ubuntu2.10 pat OpenSSH* compat 0x04000000 debug2: fd 3 setting O_NONBLOCK debug1: Authenticating to 10.xx.xx.xx:22 as 'admin@domain' debug3: hostkeys_foreach: reading file "/root/.ssh/known_hosts" debug3: record_hostkey: found key type ECDSA in file /root/.ssh/known_hosts:262 debug3: 
load_hostkeys: loaded 1 keys from 10.xx.xx.xx debug3: order_hostkeyalgs: prefer hostkeyalgs: ecdsa-sha2-nistp256-cert-v01@openssh.com,ecdsa-sha2-nistp384-cert-v01@openssh.com,ecdsa-sha2-nistp521-cert-v01@openssh.com,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521 debug3: send packet: type 20 debug1: SSH2_MSG_KEXINIT sent debug3: receive packet: type 20 debug1: SSH2_MSG_KEXINIT received debug2: local client KEXINIT proposal debug2: KEX algorithms: curve25519-sha256,curve25519-sha256@libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha256,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1,ext-info-c debug2: host key algorithms: ecdsa-sha2-nistp256-cert-v01@openssh.com,ecdsa-sha2-nistp384-cert-v01@openssh.com,ecdsa-sha2-nistp521-cert-v01@openssh.com,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,ssh-ed25519-cert-v01@openssh.com,ssh-rsa-cert-v01@openssh.com,ssh-dss-cert-v01@openssh.com,ssh-ed25519,rsa-sha2-512,rsa-sha2-256,ssh-rsa,ssh-dss debug2: ciphers ctos: chacha20-poly1305@openssh.com,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com,aes128-cbc,aes192-cbc,aes256-cbc debug2: ciphers stoc: chacha20-poly1305@openssh.com,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com,aes128-cbc,aes192-cbc,aes256-cbc debug2: MACs ctos: umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1 debug2: MACs stoc: umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1 debug2: compression ctos: none,zlib@openssh.com,zlib 
debug2: compression stoc: none,zlib@openssh.com,zlib debug2: languages ctos: debug2: languages stoc: debug2: first_kex_follows 0 debug2: reserved 0 debug2: peer server KEXINIT proposal debug2: KEX algorithms: curve25519-sha256@libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1 debug2: host key algorithms: ssh-rsa,rsa-sha2-512,rsa-sha2-256,ecdsa-sha2-nistp256,ssh-ed25519 debug2: ciphers ctos: chacha20-poly1305@openssh.com,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com debug2: ciphers stoc: chacha20-poly1305@openssh.com,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com debug2: MACs ctos: umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1 debug2: MACs stoc: umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1 debug2: compression ctos: none,zlib@openssh.com debug2: compression stoc: none,zlib@openssh.com debug2: languages ctos: debug2: languages stoc: debug2: first_kex_follows 0 debug2: reserved 0 debug1: kex: algorithm: curve25519-sha256@libssh.org debug1: kex: host key algorithm: ecdsa-sha2-nistp256 debug1: kex: server->client cipher: chacha20-poly1305@openssh.com MAC: <implicit> compression: none debug1: kex: client->server cipher: chacha20-poly1305@openssh.com MAC: <implicit> compression: none debug1: kex: curve25519-sha256@libssh.org need=64 dh_need=64 debug1: kex: curve25519-sha256@libssh.org need=64 dh_need=64 debug3: send packet: type 30 debug1: expecting SSH2_MSG_KEX_ECDH_REPLY debug3: receive packet: type 31 debug1: Server host key: ecdsa-sha2-nistp256 
SHA256:zlPEDwZal+pYFuwCFeBmY2Mgs5geOuercQB7ZEyDQKc debug3: hostkeys_foreach: reading file "/root/.ssh/known_hosts" debug3: record_hostkey: found key type ECDSA in file /root/.ssh/known_hosts:262 debug3: load_hostkeys: loaded 1 keys from 10.xx.xx.xx debug1: Host '10.xx.xx.xx' is known and matches the ECDSA host key. debug1: Found key in /root/.ssh/known_hosts:262 debug3: send packet: type 21 debug2: set_newkeys: mode 1 debug1: rekey after 134217728 blocks debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug3: receive packet: type 21 debug1: SSH2_MSG_NEWKEYS received debug2: set_newkeys: mode 0 debug1: rekey after 134217728 blocks debug2: key: /root/.ssh/id_rsa (0x5644a36e9d20) debug2: key: /root/.ssh/id_dsa ((nil)) debug2: key: /root/.ssh/id_ecdsa ((nil)) debug2: key: /root/.ssh/id_ed25519 ((nil)) debug3: send packet: type 5 debug3: receive packet: type 7 debug1: SSH2_MSG_EXT_INFO received debug1: kex_input_ext_info: server-sig-algs=<rsa-sha2-256,rsa-sha2-512> debug3: receive packet: type 6 debug2: service_accept: ssh-userauth debug1: SSH2_MSG_SERVICE_ACCEPT received debug3: send packet: type 50 debug3: receive packet: type 53 debug3: input_userauth_banner Authorized uses only. 
All activity may be monitored and reported.debug3: receive packet: type 51 debug1: Authentications that can continue: publickey,password debug3: start over, passed a different list publickey,password debug3: preferred gssapi-keyex,gssapi-with-mic,publickey,keyboard-interactive,password debug3: authmethod_lookup publickey debug3: remaining preferred: keyboard-interactive,password debug3: authmethod_is_enabled publickey debug1: Next authentication method: publickey debug1: Offering RSA public key: /root/.ssh/id_rsa debug3: send_pubkey_test debug3: send packet: type 50 debug2: we sent a publickey packet, wait for reply debug3: receive packet: type 51 debug1: Authentications that can continue: publickey,password debug1: Trying private key: /root/.ssh/id_dsa debug3: no such identity: /root/.ssh/id_dsa: No such file or directory debug1: Trying private key: /root/.ssh/id_ecdsa debug3: no such identity: /root/.ssh/id_ecdsa: No such file or directory debug1: Trying private key: /root/.ssh/id_ed25519 debug3: no such identity: /root/.ssh/id_ed25519: No such file or directory debug2: we did not send a packet, disable method debug3: authmethod_lookup password debug3: remaining preferred: ,password debug3: authmethod_is_enabled password debug1: Next authentication method: password admin@domain@10.xx.xx.xx's password: debug3: send packet: type 50 debug2: we sent a password packet, wait for reply Authentication failed. 
auth.log error sshd[3850]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=10.xx.xx.x user=admin@domain sssd_be: GSSAPI client step 1 sssd_be: message repeated 2 times: [ GSSAPI client step 1] sssd_be: GSSAPI client step 2 sshd[3850]: pam_sss(sshd:auth): authentication success; logname= uid=0 euid=0 tty=ssh ruser= rhost=10.xx.xx.x user=admin@domain sshd[3850]: pam_sss(sshd:account): Access denied for user admin@domain : 4 (System error) sshd[3850]: Failed password for admin@domain from 10.xx.xx.x port 55746 ssh2 sshd[3850]: fatal: Access denied for user admin@domain by PAM account configuration [preauth] |
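The telling line in the log above is pam_sss(sshd:account): Access denied for user ... 4 (System error): password authentication succeeds, then SSSD's account phase fails. A hedged diagnostic sketch (paths and options are the standard SSSD defaults):

```shell
sudo sss_cache -E                       # invalidate SSSD's cached entries
sudo systemctl restart sssd
sudo tail -f /var/log/sssd/sssd_*.log   # watch the domain log during a login attempt
```

Raising verbosity (e.g. debug_level = 9 in the [domain/...] section of /etc/sssd/sssd.conf) usually turns the generic "System error" into a concrete access-control or Kerberos message.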
Fastest way to find lines starting with string in gzip file Posted: 22 Aug 2021 09:42 AM PDT I have a flat database based on 65536 files, each one containing one word per line, starting with two hexadecimal characters, like this: afword 46word2 Feword3 ... I make tens of thousands of requests a day on this, so I'm looking for a better way to find a line starting with two hexadecimal characters; the files were sorted before being gzipped. As of now I do: LC=ALL zgrep --text '^af' file Is there any faster way to do this in Perl or Bash or any command line? Thanks for any enlightenment. |
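One detail worth fixing first: LC=ALL in the command above only sets an unused variable named LC; the intended spelling is LC_ALL=C, which keeps grep in the fast byte-oriented C locale. A small self-contained sketch (the sample file name is hypothetical):

```shell
# Build a tiny sorted, gzipped sample and search it in the C locale:
printf 'afword\nfeword3\n' | gzip > /tmp/sample.gz
LC_ALL=C zgrep --text '^af' /tmp/sample.gz    # prints: afword
```

Since the files are sorted, decompressing them once and querying with look(1), which binary-searches a sorted file, may also beat a linear zgrep over tens of thousands of requests.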
how do I set a program to run based on file type and not file name [duplicate] Posted: 22 Aug 2021 09:29 AM PDT The end result I require is as follows: when I run an executable file on Linux (from a terminal or some GUI), it must automatically be run by Wine if it is a typical Windows binary. I know how to set a program to run based on a file's extension, but is it possible to do it based on type? In this case the Windows format is PE32. Why would I want it this way? So I don't need to type e.g. "wine notepad" for the many commands in various dirs I need to execute. |
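What the question describes is exactly the kernel's binfmt_misc facility: it dispatches on a file's magic bytes instead of its name. A hedged sketch (assumes binfmt_misc is mounted at the usual place and Wine lives at /usr/bin/wine; PE executables start with the two-byte magic "MZ"):

```shell
# Register a handler keyed on the DOS/PE magic "MZ":
echo ':DOSWin:M::MZ::/usr/bin/wine:' | sudo tee /proc/sys/fs/binfmt_misc/register
```

Many distributions ship a ready-made registration with their Wine packages (via update-binfmts on Debian/Ubuntu), so check for an existing entry under /proc/sys/fs/binfmt_misc/ first.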
zsh completion: different completion when stdin is a pipe Posted: 22 Aug 2021 07:52 AM PDT I have a program foo. It can be used in two different modes: foo [-c] file1 [file2] or find . -print0 | foo [-0] [-c] In the first mode, the only optional argument is -c, and then there are one or more files. In the second mode, there are two optional arguments, -c and -0, and no files. How can I handle these two different modes in zsh completion? |
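A sketch of a completer covering both modes (the option descriptions are assumptions; the completion system cannot detect at completion time whether foo will be fed a pipe, so it offers -0 as well as filenames):

```shell
#compdef foo
# Hypothetical _foo completer: -c is valid in both modes, -0 only matters
# in the pipe mode, and file arguments belong to the first mode.
_arguments -s \
  '-c[common option, both modes]' \
  '-0[read NUL-delimited names from stdin (pipe mode)]' \
  '*:input file:_files'
```

For stricter behavior, _arguments exclusion lists (the parenthesized prefixes in a spec) can make -0 and the file positions mutually exclusive.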
Resolve NXDOMAIN into specific address [dnsmasq] Posted: 22 Aug 2021 09:37 AM PDT How do I redirect all hosts (A records) that do not exist in a specific domain (i.e. for which the authoritative DNS server returns NXDOMAIN) to a specific IP address? For example: example.org, www.example.org and other existing names -> returned as-is; not-exists.example.org and other NXDOMAIN results -> 10.10.10.10 (replace the NXDOMAIN answer with 10.10.10.10). Is this possible with dnsmasq or any other app? |
What does "host-specific" mean in RFCs? Posted: 22 Aug 2021 08:27 AM PDT What does "host-specific" mean in this RFC? I know host-specific can mean "capable of living solely on or in one species of host, as a parasite that infests only chickens", but I can't understand how host-specific relates to dhclient. The Dynamic Host Configuration Protocol (DHCP) provides configuration parameters to Internet hosts. DHCP consists of two components: a protocol for delivering host-specific configuration parameters from a DHCP server to a host and a mechanism for allocation of network addresses to hosts. |
Is it possible to cause an interpreter infinite loop? Posted: 22 Aug 2021 08:12 AM PDT I'm considering a possible denial-of-service attack scenario in which a script causes a system resource outage by recursively invoking itself as its own interpreter. The principle is as follows: the script specifies on its first line, in the form of a #! shebang, its own absolute path as its interpreter. The system kernel will, depending on its support, automatically invoke the interpreter during the execve system call, prepending the interpreter to the vector of arguments. Such invocation would exhaust the limit on the size of program arguments ({ARG_MAX}) set in the system, thus causing a (possibly isolated) failure.

Experiment: I've created two different sets of attack vectors.

The first one invokes itself:

#!/usr/local/bin/recurse

The second pair invoke each other:

#!/usr/local/bin/recurse-1
#!/usr/local/bin/recurse-2

I've tested these two attack vectors on macOS Big Sur 11.5.2, and when I check the exit status using echo $?, it shows 0, which means the processes completed successfully. Question: Have modern operating systems been patched against such an attack? Are there research papers on this? |
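On Linux, at least, this particular vector fizzles: the kernel caps shebang nesting at a small depth, so execve on a self-referential script fails quickly (typically with ELOOP) instead of growing the argument vector toward {ARG_MAX}. A quick check (hypothetical path /tmp/recurse):

```shell
printf '#!/tmp/recurse\n' > /tmp/recurse
chmod +x /tmp/recurse
if /tmp/recurse 2>/dev/null; then
    echo "ran successfully"
else
    echo "execve refused, status $?"
fi
```

The 0 exit status observed on macOS is likely the shell's ENOEXEC fallback: when execve rejects the file, the shell re-runs it as a shell script, and a file containing only a #! comment line exits 0.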
Where are my inodes used (advanced, debugfs, etc.) Posted: 22 Aug 2021 08:13 AM PDT I made a copy of an ext4 volume with many errors and fscked it, before making an extensive comparison to make sure no data was lost in the process. Since there were all kinds of encodings in the filenames, I decided to work by inode instead of by filename, and used debugfs and md5sum to compute a checksum for every non-zeroed inode (some of them being non-zeroed but marked as free). Then I joined the result with the list of files (and dirs, etc.) by inode, and made sure that the inner join was the same on both filesystems. However, to my big surprise, I also found in the remainder (outer join minus inner join) 3 inodes which seem to have been allocated during the repair process (they were marked used on the fscked copy and free on the original errored volume), and I can't figure out what they relate to:

root:~# for F in md0p2.nomatch md0p99.nomatch ; do grep used $F; done | sort -n | uniq -u | less
279838261 used 5e8519e3eea8bc75915e03caa75dadf9
279838289 used e43d2402bf4355df67a59ac5220711d4
279838348 used d41d8cd98f00b204e9800998ecf8427e
root:~# egrep '279838261|279838289|279838348' md0p2.md5.saved md0p99.md5.saved
md0p2.md5.saved:(279838261, free, 'd41d8cd98f00b204e9800998ecf8427e')
md0p2.md5.saved:(279838289, free, 'd41d8cd98f00b204e9800998ecf8427e')
md0p2.md5.saved:(279838348, free, 'a674e48d26a9c2827d6de2791b420b77')
md0p99.md5.saved:(279838261, used, '5e8519e3eea8bc75915e03caa75dadf9')
md0p99.md5.saved:(279838289, used, 'e43d2402bf4355df67a59ac5220711d4')
md0p99.md5.saved:(279838348, used, 'd41d8cd98f00b204e9800998ecf8427e')
root:~# debugfs /dev/md0p99
debugfs 1.44.5 (15-Dec-2018)
debugfs: ncheck <279838261>
ncheck: Bad inode - <279838261>
debugfs: ncheck <279838289>
ncheck: Bad inode - <279838289>
debugfs: ncheck <279838348>
ncheck: Bad inode - <279838348>
debugfs: q

Any idea what else I could try to investigate this?
EDIT (answer to Icarus's question) : debugfs: inode_dump <279838261> 0000 8967 075e 545e 612a 6127 db67 c53d 0ef4 .g.^T^a*a'.g.=.. 0020 6c2a 636c dec3 cb60 c670 0000 6639 6639 l*cl...`.p..f9f9 0040 0000 0000 8975 9a50 6a70 6b6d 6d2f a466 .....u.Pjpkmm/.f 0060 f27f a7b0 6e6a 7663 7760 ed6a fa2d e9e0 ....njvcw`.j.-.. 0100 7d6e 715d 6736 aa68 e73d bde1 632a 767b }nq]g6.h.=..c*v{ 0120 736c bf39 e477 bc08 785b 4468 252f aa69 sl.9.w..x[Dh%/.i 0140 ef72 869e 695f 665f 2d6d e370 b039 8825 .r..i_f_-m.p.9.% 0160 366c 2c72 6c7a eb70 b97f f033 6962 2a6f 6l,rlz.p...3ib*o 0200 2a20 b82c a83e eec8 7b61 3f3f 3e6e f46e * .,.>..{a??>n.n 0220 be4f d287 2d22 2a6f 3e6b ac61 ae4b efd9 .O..-"*o>k.a.K.. 0240 2560 6376 6220 ae61 f477 503f 6477 6b76 %`cvb .a.wP?dwkv 0260 6a66 ec2a aa76 5b2f 222d 3963 3e68 ac55 jf.*.v[/"-9c>h.U 0300 b73e 27c7 2562 6d6c 775a eb7e da2b 528e .>'.%bmlwZ.~.+R. 0320 7339 6c77 6f65 bf39 8a6a c893 257c 3867 s9lwoe.9.j..%|8g 0340 386a ac7d 8b61 96b2 6739 3f3f 6536 e12a 8j.}.a..g9??e6.* 0360 cd35 a957 6e39 612c 7b24 e92e d334 ded7 .5.Wn9a,{$...4.. debugfs: stat <279838261> invalid inode->i_extra_isize (8234) Inode: 279838261 Type: block special Mode: 03611 Flags: 0x0 Generation: 1600544617 Version: 0x61ac6b3e:509a7589 User: 2142854663 Group: 871395526 Project: -638628946 Size: 711024212 File ACL: 124156513578285 Links: 0 Blockcount: 134605238057318 Fragment: Address: 1915513910 Number: 0 Size: 0 ctime: 0xf40e3dc5:c8ee3ea8 -- Mon Aug 26 16:12:05 1963 atime: 0x67db2761:6ef46e3e -- Wed Jun 2 11:18:25 2297 mtime: 0x6c632a6c:3f3f617b -- Thu Dec 6 14:09:00 2435 crtime: 0x87d24fbe:6f2a222d -- Mon Mar 17 23:12:14 2042 dtime: 0x60cbc3de:(c8ee3ea8) -- Thu Jun 17 23:51:26 2021 Size of extra inode fields: 8234 Device major/minor number: 112:106 (hex 70:6a) debugfs: inode_dump <279838289> 0000 7744 2a7d b82c a7b5 8b36 2c65 746e 1b14 wD*}.,...6,etn.. 0020 1433 61e6 dec3 cb60 2e08 0000 237d 1529 .3a....`....#}.) 0040 0000 0000 2565 2422 0f38 3b3a 5fbb 5edf ....%e$".8;:_.^. 
0060 786c 6b71 280c 6232 2c63 bfdc af63 762c xlkq(.b2,c...cv, 0100 4e30 4424 792b 3bad 8b60 246f 3d65 7202 N0D$y+;..`$o=er. 0120 3d10 8e4a e205 3065 772e 6628 5862 2e86 =..J..0ew.f(Xb.. 0140 71ba 06c6 5768 2c6e 6106 2c3b c02d f801 q...Wh,na.,;.-.. 0160 2425 3e64 2422 4028 72c1 76a3 3e27 7a2c $%>d$"@(r.v.>'z, 0200 6e2e 621d 252c f369 a66a e56e 2d79 7061 n.b.%,.i.j.n-ypa 0220 1c3e 3d34 20a9 6d20 7f5f 605f 7941 627e .>=4 .m ._`_yAb~ 0240 cc66 e943 8d62 6376 6a22 4e29 3882 7398 .f.C.bcvj"N)8.s. 0260 67af 6a2c 766a 6a1b 7e67 b668 59c3 7e76 g.j,vjj.~g.hY.~v 0300 716d 7146 1e36 4a61 80b8 223f 643f 6146 qmqF.6Ja.."?d?aF 0320 346d 2672 14a4 6f2a 5160 3805 6f63 8f65 4m&r..o*Q`8.oc.e 0340 e32a 7e77 6766 2d0c 3375 0b6f 6799 7e68 .*~wgf-.3u.og.~h 0360 6e23 3e05 6d60 6847 0825 562a 676a 254e n#>.m`hG.%V*gj%N debugfs: stat <279838289> invalid inode->i_extra_isize (11886) Inode: 279838289 Type: directory Mode: 02167 Flags: 0x0 Generation: 1848404055 Version: 0x5f605f7f:22246525 User: -1049461462 Group: -1552545746 Project: 2120368505 Size: -1247335240 File ACL: 44256335758945 Links: 0 Blockcount: 37538703441187 Fragment: Address: 1681794340 Number: 0 Size: 0 ctime: 0x141b6e74:69f32c25 -- Fri Oct 16 18:36:04 2116 atime: 0x652c368b:6170792d -- Thu Nov 22 02:27:39 2159 mtime: 0xe6613314:6ee56aa6 -- Fri Aug 1 15:37:24 2228 crtime: 0x343d3e1c:206da920 -- Thu Oct 9 22:27:08 1997 dtime: 0x60cbc3de:(69f32c25) -- Mon Jul 25 06:19:42 2157 Size of extra inode fields: 11886 BLOCKS: (0):976959503, (1):3747527519, (2):1902865528, (3):845286440, (4):3703530284, (5):745956271, (6):608448590, (7):2906336121, [...] (1033):3110968555, (1034):1757067253, (1035):264071090, (DIND):2251186776, (TIND):3322329713 TOTAL: 1038 debugfs: inode_dump <279838348> 0000 e17b 1d1a 0000 0000 644c 5977 c847 5cd8 .{......dLYw.G\. 0020 6a2a 6466 da5f cc60 f122 0000 0000 0000 j*df._.`."...... 0040 0000 0000 f020 8cb1 3076 6776 7648 5d24 ..... ..0vgvvH]$ 0060 ee73 a2ee 7d6e 715d 4270 1d74 f561 e55d .s..}nq]Bp.t.a.] 
0100 7f7d 7267 2d53 5d70 e57c 150e 6765 7667 .}rg-S]p.|..gevg 0120 3e5c 466a 1f7a 55fb 652c 632b 784c 5276 >\Fj.zU.e,c+xLRv 0140 5b6c 1462 636d 712c 0000 0000 0000 0000 [l.bcmq,........ 0160 4160 2b79 0000 4124 b22d c781 6567 766b A`+y..A$.-..egvk 0200 1c00 1b65 e073 a2e7 7961 7677 7154 1363 ...e.s..yavwqT.c 0220 bd7a faca 414e 2a63 2f59 1f66 e65e 5eb1 .z..AN*c/Y.f.^^. 0240 3b28 2333 2a47 0863 b37a 1b0c 7c2c 602c ;(#3*G.c.z..|,`, 0260 495e 1f62 b57e 5a3a 626b 6c2a 6116 572d I^.b.~Z:bkl*a.W- 0300 c438 54d8 7f6c 6b71 2d68 4165 e674 46ad .8T..lkq-hAe.tF. 0320 2d22 6574 6949 6c4f f338 6db0 3b59 2e66 -"etiIlO.8m.;Y.f 0340 580b 6e28 d223 c39d 6a2d 7f2e 7752 5a77 X.n(.#..j-..wRZw 0360 8672 fdfd 2366 2c5b 6013 4872 cf7d aec9 .r..#f,[`.Hr.}.. debugfs: stat <279838348> debugfs: Inode: 279838348 Type: bad type Mode: 05741 Flags: 0x0 Generation: 745631075 Version: 0x661f592f:b18c20f0 User: 766646813 Group: -2117655823 Size: 0 File ACL: 39861591474176 Links: 0 Blockcount: 0 Fragment: Address: 2032885825 Number: 0 Size: 0 ctime: 0xd85c47c8:e7a273e0 -- Sat Dec 4 19:24:08 1948 atime: 0x77594c64:63135471 -- Fri Jul 21 14:17:40 2169 mtime: 0x66642a6a:77766179 -- Tue Jul 15 18:23:06 2160 crtime: 0xcafa7abd:632a4e41 -- Mon Nov 29 13:04:13 2077 dtime: 0x60cc5fda:(e7a273e0) -- Fri Jun 18 10:56:58 2021 Size of extra inode fields: 28 BLOCKS: (0):1986491952, (1):610093174, (2):4003623918, (3):1567714941, (4):1948086338, (5):1575313909, (6):1735556479, (7):1885164333, [...] (306179):592454480, (306180):4793153, (306181):25299970, (306182):2013266074, (306183):394777, (TIND):1645505627, (DIND):3120627712 TOTAL: 3082 |
What does "r8169 can't disable ASPM" mean and how should I fix it? Posted: 22 Aug 2021 09:21 AM PDT I found this in my kernel logs: kernel: r8169 0000:02:00.0: can't disable ASPM; OS doesn't have ASPM control What does it mean and how should I fix it? lsb_release -a No LSB modules are available. Distributor ID: Ubuntu Description: Ubuntu 21.04 Release: 21.04 Codename: hirsute |
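The message means the platform firmware has kept control of ASPM (PCIe Active State Power Management) for itself, so the r8169 driver cannot switch it off; it is generally a harmless notice. If you do want the OS to own link power management, one hedged option is the pcie_aspm=force kernel parameter (a sketch; forcing ASPM against the firmware's wishes can cause instability on some hardware, so treat it as an experiment):

```shell
# /etc/default/grub  (afterwards: sudo update-grub && reboot)
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash pcie_aspm=force"
```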
How can I print a '@' in a Linux shell? Posted: 22 Aug 2021 10:32 AM PDT I have a problem with a remote server whose console keyboard layout differs from my physical keyboard. I need to copy a '@' character to be able to paste it in a browser forum. The server is in a VPN without external access, so simply googling for 'at symbol' doesn't work. Is there some trick to get a @ printed in the console so I can copy and paste it? Is there a well-known file that I can simply cat to show a @ inside it? A README or similar. |
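No special file is needed: any POSIX shell can synthesize the character from its code point. '@' is ASCII 64, i.e. octal 100:

```shell
printf '\100\n'    # prints: @
```

This works regardless of the console keyboard layout, since no '@' key is ever pressed.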
Btrfs, compressed or not compressed, that is the question Posted: 22 Aug 2021 09:53 AM PDT I have mounted Btrfs subvolumes (including /home ) with the compress=no option in /etc/fstab . However, when I run btrfs inspect-internal dump-super -a <device_name> (both, on the running system, as well as on a live boot and mounting with compress=no ), it shows COMPRESS_ZSTD in incompat_flags . So, are the subvolumes being used without compression, or with compression? Fedora 34 Workstation (GNOME), fresh install. This seems to default to zstd for at least the /home subvolume, which was not the case earlier, but is compression actually enabled despite being mounted with compress=no , as shown by inspect-internal ? The partition containing the subvolumes is LUKS2-encrypted. |
Odd Issue with Sessions Originating from NATed Network Posted: 22 Aug 2021 10:04 AM PDT I am having an odd Apache issue where one of two devices on the same remote network will time out at 60 seconds. I have been able to extend that to two minutes by setting TimeOut to 120 seconds, but the second device is still losing its connection with the server. I get a generic 'Could not open the page as the server stopped responding' (Safari). My test is to load a simple page (a PHP script that only prints the date) on my desktop and on my iPad Pro. I wait exactly 60 seconds, refresh the iPad, and I get the error. This result can be replicated on many end-user networks. My first thought was that Apache needed to maintain a session for each entity on the remote network, as Apache would see them originating from the same IP, but mod_session did not have any effect. The target server is an AWS instance running CentOS 7, Apache 2.4 (fully patched) and PHP 5.6.4. (I know; there is a project underway to update to PHP 7.x.) I will note that this was not happening on a CentOS 6 server running Apache 2.2, and the existing environment has 4 or fewer users as it is a development environment. Any help would be greatly appreciated. Settings: httpd.conf:

Timeout 300
KeepAlive On

Prefork settings:

StartServers 8
MinSpareServers 5
MaxSpareServers 20
ServerLimit 256
MaxClients 256
MaxRequestsPerChild 4000
MaxConnectionsPerChild 0

Some additional notes: This only seems to happen with PHP files, even those files that contain only text. |
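One hedged avenue to explore: with two clients multiplexed behind a single NAT address, a kept-alive connection can be reused after the router's NAT mapping for it has expired, which looks exactly like a one-minute stall on the second device. Shortening Apache's keep-alive window is a cheap test (values are illustrative, not tuned recommendations):

```apacheconf
KeepAlive On
KeepAliveTimeout 5
MaxKeepAliveRequests 100
```

If the symptom disappears, the interaction is between the NAT idle timeout and the server's keep-alive lifetime rather than anything PHP-specific.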
zfs send performance Posted: 22 Aug 2021 08:46 AM PDT I'm having an odd performance issue when trying to backup my zfs filesystems. I can tar the contents of a zfs filesystem at 100+ MB/s, but zfs send trickles the data at maybe 5 MB/s. The filesystem only has 5 or 6 snapshots. A tar takes about 1.5 hours. A zfs send takes more than 12 hours! In both cases, the destination is a file on another pool. (i.e. zfs send tank/myfs > /backup/myfs.zfsbackup vs. tar -cf /backup/myfs.tar ./myfs ) My first thought was fragmentation, but if that was the case, wouldn't tar be just as slow? I'm getting decent enough overall disk performance, but my backups are literally taking forever. I'm running Solaris 11.4 on x64 hardware. Conceptually, the issue may be similar to zfs on Linux, but I'm not that familiar with the Linux variant. I ran the dtrace script provided in the answer below for approx 12 minutes while a zfs send was running. dtrace -i 'profile:::profile-1001hz /arg0/ { @[ stack() ] = count(); }' I'm not sure how to interpret the results. There were two sections of summary that contained a good number of zfs calls: zfs`zfs_fletcher_4_native+0x79 zfs`zfs_checksum_compute+0x181 zfs`zio_checksum_compute+0x1d6 zfs`zio_checksum_compute_dispatch+0x28 zfs`zio_checksum_generate+0x59 zfs`zio_execute+0xb4 genunix`taskq_thread+0x3d5 unix`thread_start+0x8 1041 unix`bcopy+0x55a genunix`uiomove+0xb3 zfs`dmu_xuio_transform+0x83 zfs`zfs_write+0x78a genunix`fop_write+0xf5 genunix`vn_rdwr_impl+0x1f3 genunix`vn_rdwr_uiov+0x63 zfs`dump_buffer_flush+0x8e zfs`dump_buffer_append+0x85 zfs`dump_bytes_impl+0x49 zfs`dump_bytes+0x49 zfs`dump_record+0x190 zfs`dump_data+0x26a zfs`backup_cb+0x4b5 zfs`traverse_visitbp+0x3df zfs`traverse_visitbp+0x8e4 zfs`traverse_visitbp+0x8e4 zfs`traverse_dnode+0x1dc zfs`traverse_visitbp+0x6d2 zfs`traverse_visitbp+0x8e4 1183 The highest number of calls seem to be cpu idle calls... 
unix`mach_cpu_idle+0x17 unix`cpu_idle+0x2b7 unix`cpu_idle_adaptive+0x19 unix`idle+0x11e unix`thread_start+0x8 1147665 unix`mach_cpu_idle+0x17 unix`cpu_idle+0x2b7 unix`cpu_idle_adaptive+0x19 unix`idle+0x11e unix`thread_start+0x8 2462890 During the zfs send, the drives are busy, but there aren't any waits and I don't think service times are all that bad... extended device statistics r/s w/s Mr/s Mw/s wait actv wsvc_t asvc_t %w %b device 157.0 0.0 4.9 0.0 0.0 1.6 0.0 10.5 0 77 c0t5000C500A22D9330d0 154.0 0.0 4.9 0.0 0.0 1.7 0.0 11.0 0 82 c0t5000C500A232AFA6d0 186.0 0.0 6.4 0.0 0.0 2.4 0.0 12.7 0 93 c0t5000C500A24AD833d0 185.0 0.0 6.3 0.0 0.0 1.8 0.0 9.9 0 79 c0t5000C500A243C8DEd0 During a tar, disk usage seems to be fairly similar... i.e. r/s, service times, %busy, etc, and yet the amount of data being read is vastly different: extended device statistics r/s w/s Mr/s Mw/s wait actv wsvc_t asvc_t %w %b device 158.0 0.0 33.3 0.0 0.0 1.9 0.0 11.9 0 86 c0t5000C500A22D9330d0 190.0 0.0 31.9 0.0 0.0 1.6 0.0 8.3 0 75 c0t5000C500A232AFA6d0 170.0 0.0 37.1 0.0 0.0 1.7 0.0 9.7 0 80 c0t5000C500A24AD833d0 168.0 0.0 38.4 0.0 0.0 1.7 0.0 10.1 0 80 c0t5000C500A243C8DEd0 |
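For what it's worth, the iostat numbers above already hint at the difference: during zfs send the disks serve about 5 MB/s over ~157 reads/s, roughly 32 KB per read, while during tar they serve ~33 MB/s at a similar request rate, roughly 215 KB per read. The send stream appears bound by many small, dependent reads rather than raw disk throughput. A commonly tried mitigation is to decouple the reader from the writer with a large in-memory buffer, sketched here with mbuffer (an assumption that mbuffer is installed; -s is the block size, -m the buffer memory, -o the output file):

```shell
zfs send tank/myfs | mbuffer -s 128k -m 1G -o /backup/myfs.zfsbackup
```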
Looping through the directories Posted: 22 Aug 2021 08:43 AM PDT paths=$1 files=$2 for dir in ${paths[@]} do newdir=${dir##*/} newpath=(/pathname) val=`mkdir -p ${newpath}/${newdir}` echo ${val} for file in "${dir}"/*; do if [[ -f $file && $file = *.@(c|cc|cpp|h) ]]; then cp ${file} ${val} fi done done I want to loop through the directories and copy the filenames in the newly created val directory..but I am unable to print the value of val. What am I doing wrong? |
unable to connect to ssh in winscp (an sftp application) Posted: 22 Aug 2021 08:23 AM PDT I'm not able to connect to my server using the host IP and password in WinSCP on port 22; it throws this error: Network error and The server rejected SFTP connection, but it listens for FTP connections. Did you want to use FTP protocol instead of SFTP? Prefer using encryption. |
Adding a command to break long lines into shorter ones to an ed script Posted: 22 Aug 2021 08:44 AM PDT I frequently have long lines in my ed document which I would like to split into separate lines of a maximum length. I found this command, which achieves this: fold -s -w80 file (Split very long lines) I can apply this command also from within ed as follows: !fold -s -w80 % However, when I add this command to my ed script, which comprises the following lines,

g/\(''\|''\)/s//"/g
g/\("\|"\)/s//"/g
g/\('\|'\)/s//'/g
g/\*/s///g
g/^#.*: /s///g
g/^ */s///
g/ *$/s///
g/ */s// /g
e !uniq %
e !fold -s -w80 %
# g/^$/d
w
Q

I get an error. I would like to know the reason for the error and a way of circumventing it. |
Dualboot: Other OS can't get DHCP lease after Windows was booted Posted: 22 Aug 2021 07:45 AM PDT I've been dual-booting Windows and Linux on my machine for about two years now and never had a problem with DHCP conflicts. After the recent Windows 20H2 update I suddenly can't get DHCP on Linux to work if Windows was booted before. This doesn't seem to be a timing problem, because I got the same result after waiting a few days. I'm using an AVM Fritz Box as my router/DHCP server, and the only way to get DHCP to work on Linux was to reset the Fritz Box, after which it worked immediately. I was using the broader term "Linux" before because I tested it with various distributions (Arch, Gentoo, Ubuntu) and none of them could get their respective DHCP client to work with my Fritz Box. I even tried FreeBSD to rule out a problem with Linux. All of them printed some form of "DHCP lease expired, could not get IP". After resetting my router and getting a DHCP lease again, I started Windows, then tried to boot a *nix again and got the same problem. I honestly don't know what the cause could be, because as I said it worked before upgrading my Windows 10 to the latest version, and it doesn't make sense to me that my DHCP server suddenly refuses to work after answering a DHCP request from Windows 10. EDIT: My mainboard is an ASUS Sabertooth Z87 with an Intel I217-V NIC. As user A.B. correctly suspected, the issue wasn't about getting a DHCP lease but rather a problem with the state of the I217-V NIC after shutting down Windows. See this post for the solution: https://unix.stackexchange.com/a/620766/442856 |
Creating VMs using KVM. Error: Unit libvirtd.service could not be found? Posted: 22 Aug 2021 09:00 AM PDT TL;DR: On Ubuntu 20.04.1, I am trying to run VMs using KVM. After installing the required packages, I still get the error below:

sudo systemctl status libvirtd
Unit libvirtd.service could not be found.

This is what I have done. a) Check KVM support:

$ sudo kvm-ok
INFO: /dev/kvm exists
KVM acceleration can be used

b) Install required packages:

$ sudo apt install -y qemu qemu-kvm libvirt-daemon libvirt-clients bridge-utils virt-manager

Now, the above should be everything: I should be able to open the virt-manager GUI and get going, and the libvirtd service should already have been started. But there is still no libvirtd service running on my machine, and no libvirtd.service unit installed. Obviously, virt-manager is not able to connect to the daemon, hence the errors below. After doing $ sudo virt-manager, the virt-manager GUI starts with root permissions. The window clearly says: The libvirtd service does not appear to be installed. Install and run the libvirtd service to manage virtualization on this host. And obviously no VM creation is feasible; below is the error when attempting it. Reference - How to Install KVM on Ubuntu 20.04 Note: This issue is not a duplicate of KVM Virt-Manager Error: No active connection to Install on (one answer there suggests installing libvirt-bin, but this package does not exist in the repo).

$ apt list libvirt-bin
Listing... Done

Hardware: this attempt is on a quad-core Intel CPU laptop. Edit - Following the comment from ajgringo619, I could solve the libvirtd.service issue, but other issues still persist. This is also posted as a separate question: Warning: KVM kernel modules are not loaded. Your VM may perform poorly? This is the lsmod output:

$ sudo lsmod | grep kvm
kvm_intel 282624 0
kvm 663552 1 kvm_intel

Should I ignore the warning? Is the performance really going to be poor? |
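On Ubuntu 20.04 the libvirtd systemd unit is shipped by libvirt-daemon-system, which is missing from the package list above (libvirt-daemon alone does not install the unit). A hedged sketch of the usual fix:

```shell
sudo apt install -y libvirt-daemon-system
sudo systemctl enable --now libvirtd
sudo adduser "$USER" libvirt      # then log out and back in for the group to apply
```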
Auto-mount cryfs on startup Posted: 22 Aug 2021 10:31 AM PDT cryfs requires a password/passphrase entry for mounting a filesystem. I want to automatically mount an FS at startup (e.g. by calling a script from inside rc.local to do the job). (The encrypted filesystem to be mounted already exists.) Can I give the password as a hash? I do not want to save it in plain text in my startup script. Any idea how to overcome this issue? Thanks in advance! Linux system. |
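A hash cannot work here: cryfs derives the volume key from the real passphrase, so a startup script needs the passphrase itself. A common compromise is a root-only (chmod 600) password file piped into cryfs's noninteractive frontend — a sketch, assuming cryfs >= 0.10; all paths in the comment are hypothetical:

```shell
#!/bin/sh
# mount_cryfs BASEDIR MOUNTPOINT PASSFILE
# With CRYFS_FRONTEND=noninteractive, cryfs reads the passphrase from stdin.
mount_cryfs() {
    CRYFS_FRONTEND=noninteractive cryfs "$1" "$2" < "$3"
}
# e.g. from rc.local (hypothetical paths):
#   mount_cryfs /root/.enc /mnt/secure /root/.cryfs-pass
```

Note this is protection by file permissions only, not secrecy: anyone who can read the file as root has the passphrase.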
Removing blackarch completely from system Posted: 22 Aug 2021 10:17 AM PDT Like an idiot, I installed BlackArch, and not too long ago I tried to remove all of its files, but there are still some crumbs left. I tried to update the packages through the terminal and this is what I got: sudo pacman -Syyu :: Synchronizing package databases... core 148.9 KiB 242K/s 00:01 [######################] 100% extra 1759.7 KiB 296K/s 00:06 [######################] 100% community 5.3 MiB 568K/s 00:10 [######################] 100% multilib 183.2 KiB 1263K/s 00:00 [######################] 100% blackarch 2.7 MiB 752K/s 00:04 [######################] 100% blackarch.sig 566.0 B 0.00B/s 00:00 [######################] 100% error: blackarch: signature from "Levon 'noptrix' Kayan (BlackArch Developer) <noptrix@nullsecurity.net>" is invalid error: failed to update blackarch (invalid or corrupted database (PGP signature)) error: failed to synchronize all databases How do I completely remove all traces of BlackArch from my computer? I don't want it constantly looking for its package updates! I tried the following: paclist blackarch | cut -d' ' -f1 | xargs sudo pacman -R checking dependencies... error: failed to prepare transaction (could not satisfy dependencies) :: bind-tools: removing geoip breaks dependency 'geoip' :: cryptsetup: removing argon2 breaks dependency 'argon2' :: gnome-color-manager: removing exiv2 breaks dependency 'exiv2' :: gnome-nettool: removing iputils breaks dependency 'iputils' :: libgexiv2: removing exiv2 breaks dependency 'exiv2' :: php: removing argon2 breaks dependency 'argon2' |
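The piece that keeps pacman syncing that database is the [blackarch] repository stanza in /etc/pacman.conf. A sketch that strips it — assuming (worth checking in your file) that the stanza is the section header plus the lines up to the next [section]:

```shell
#!/bin/sh
# remove_blackarch_repo FILE: delete the [blackarch] stanza in place (GNU sed).
remove_blackarch_repo() {
    sed -i '/^\[blackarch\]/,/^\[/{/^\[blackarch\]/d;/^\[/!d}' "$1"
}
# As root, after backing up pacman.conf:
#   remove_blackarch_repo /etc/pacman.conf
#   rm -f /var/lib/pacman/sync/blackarch.db*   # drop the cached database
#   pacman -Syyu
```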
Automatic module signing for distribution in Linux Posted: 22 Aug 2021 10:01 AM PDT I'm new to writing Linux modules (drivers) and digital signatures, so please correct me if any of my understanding is incorrect. When I run make modules_install on my module, I get the following error (veikk is the module name): At main.c:160: - SSL error:02001002:system library:fopen:No such file or directory: ../crypto/bio/bss_file.c:72 - SSL error:2006D080:BIO routines:BIO_new_file:no such file: ../crypto/bio/bss_file.c:79 sign-file: certs/signing_key.pem: No such file or directory I was looking up tutorials on signing modules, but I was very confused about how to distribute a signed module. There are tutorials for manually signing modules (e.g., this, this, this), but these all seem to be post-installation and involve generating and registering a key with the kernel. It seems that the kernel wants to automatically sign the module on installation using certs/signing_key.pem (hence the error). Using the advice provided by this Unix Stack Exchange question, I was able to get rid of the error. This generates the x509.genkey file, and then creates the signing_key.pem and signing_key.x509 files in the certs directory in the kernel directory. printf "[ req ]\ndefault_bits = 4096\ndistinguished_name = req_distinguished_name\nprompt = no\nstring_mask = utf8only\nx509_extensions = myexts\n\n[ req_distinguished_name ]\nCN = Modules\n\n[ myexts ]\nbasicConstraints=critical,CA:FALSE\nkeyUsage=digitalSignature\nsubjectKeyIdentifier=hash\nauthorityKeyIdentifier=keyid" > x509.genkey openssl req -new -nodes -utf8 -sha512 -days 36500 -batch -x509 -config x509.genkey -outform DER -out $(BUILD_DIR)/certs/signing_key.x509 -keyout $(BUILD_DIR)/certs/signing_key.pem After running this and make modules_install , the module seems to install correctly. 
The output of modinfo veikk seems to show a valid signature: filename: /lib/modules/5.1.5-arch1-2-ARCH/extra/veikk.ko.xz license: GPL srcversion: A82263B16A25C763382D8B9 alias: hid:b0003g*v00002FEBp00000003 alias: hid:b0003g*v00002FEBp00000002 alias: hid:b0003g*v00002FEBp00000001 depends: hid retpoline: Y name: veikk vermagic: 5.1.5-arch1-2-ARCH SMP preempt mod_unload sig_id: PKCS#7 signer: Modules sig_key: 27:E8:FC:4A:4E:15:0C:AF:40:D5:A1:A4:10:E5:B5:55:BF:AF:EB:66 sig_hashalgo: sha512 signature: AC:AF:49:16:D4:AD:D9:7B:C5:52:A5:9F:F8:46:1C:DF:93:71:05:00: 4D:BF:96:96:3C:D1:11:19:6F:AC:D5:27:7D:E3:EE:8D:6C:BB:17:F4: 53:D3:FD:EE:85:22:97:57:BB:27:23:9C:8A:04:79:75:99:C4:A0:E6: 29:AF:20:15:87:EA:41:D2:26:00:2B:A1:39:68:28:FE:05:F5:F1:B1: 42:F8:FF:66:C0:6C:B5:17:A1:E7:F4:65:0A:17:64:99:9E:11:86:C0: 94:E7:D5:83:59:50:BE:0D:33:B8:A2:64:66:4F:70:A3:EB:E4:FB:B4: 52:D9:26:9C:57:CC:0D:D6:53:51:C2:90:D6:51:13:83:B6:22:EC:C9: DF:15:1D:1E:34:BD:7A:2D:8F:13:2D:78:8C:D3:EA:43:0B:6C:8D:DA: 9A:DA:A1:74:03:FC:D8:72:D0:96:54:52:60:AB:7A:BB:3C:D0:F4:8C: B7:92:21:B1:D8:02:01:6B:9B:AD:11:1A:90:5B:21:94:12:B7:5A:15: 10:6B:92:FA:74:F5:49:A2:4A:65:FF:4E:B6:9B:08:7B:BD:E5:85:9D: 98:52:A2:E4:D7:B4:0D:90:0D:62:7E:CE:6B:F8:8B:0C:33:76:1E:01: C7:0D:29:8C:97:BC:E1:35:58:2B:55:3F:6E:D9:36:46:50:76:74:67: 1F:B2:F6:C3:6B:24:4D:C1:7E:8D:14:4D:10:2D:1D:80:3C:82:02:1C: A6:87:14:8B:A0:3C:21:EA:DD:A7:CD:9C:D0:1B:DF:84:53:BF:0A:B6: DA:50:C4:AA:FF:90:44:47:4B:9F:8A:1C:C3:14:5D:A3:B5:A4:5F:6F: E1:E0:E2:51:B1:1E:5C:7E:95:70:72:76:3A:9D:53:10:F5:F0:3F:CD: E5:2B:EF:E4:3D:DB:64:65:9B:AE:E6:23:6E:4E:F1:4B:94:17:FF:FF: 06:A0:79:84:E1:BE:24:9D:93:B9:D4:94:41:76:92:D5:5B:8F:F6:4F: 98:B9:24:6F:01:CD:4F:49:52:15:48:79:4A:F3:46:CF:8A:AC:21:A9: 64:81:AC:01:15:80:06:F4:C3:9D:8A:C0:48:A6:53:C5:81:C2:DD:B1: C6:B9:80:B8:A9:C2:89:B8:20:C5:89:81:90:15:86:78:F7:09:3F:FD: F6:AC:54:57:8C:E0:B4:62:E0:78:CB:59:63:FA:E6:E2:8C:78:59:31: 92:E5:B5:E3:75:FE:F6:8F:82:3B:D6:5B:B1:84:E9:A8:9E:A4:B0:03: 
99:8D:41:55:FF:11:A8:B6:A3:B9:EA:1D:5C:58:F7:D2:A6:F4:3A:C9: B1:E6:83:10:B7:E5:E4:15:28:2C:62:96 My question: Is this a recommended (and safe) way to sign a driver? Preferably, I would like to have end users not have to worry about the hassle of signing the drivers themselves when installing. Because my understanding is a little muddy, here are a few questions I don't understand: - Is this automatic signing on build as secure as the tutorials above for manually signing a driver after installation? I.e., I'm generating a key to sign it with, but that key never (at least explicitly) is loaded into the kernel.
- How are drivers normally distributed and signed? I would expect large companies with proprietary Linux drivers, such as Nvidia, to have their modules signed in some way.
- Is there a way to pre-sign a module (on my end)? This seems unlikely because the module should be built for any system it's to be used on.
I would like to keep Secure Boot on (disabling it allows the unsigned module to load, but clients would prefer to have Secure Boot on). |
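For the Secure Boot case, a hedged sketch of the usual out-of-band flow: sign the built module with the kernel's scripts/sign-file helper (the same tool make modules_install invoked above), then enroll the certificate as a Machine Owner Key so shim trusts it. The paths and the MOK approach are assumptions about a shim-based distro; filenames match the question's key pair. Printed as a checklist rather than executed:

```shell
#!/bin/sh
# Print the signing/enrollment steps for review; run as root, then reboot
# to complete the MOK enrollment prompt.
sign_module_plan() {
    mod="$1"
    kdir="/lib/modules/$(uname -r)/build"
    printf '%s/scripts/sign-file sha512 signing_key.pem signing_key.x509 %s\n' "$kdir" "$mod"
    printf 'mokutil --import signing_key.x509\n'  # cert is already DER, as generated above
}
sign_module_plan veikk.ko
```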
Firefox fullscreen animation is too slow Posted: 22 Aug 2021 09:15 AM PDT The Firefox fullscreen animation is too slow for me. I have a relatively small screen, so I want a quick way to fill my screen with content. How do I make the animation faster? Version: firefox-58.0.2-1.fc27.x86_64 (Fedora Linux 27) Additional clues I have two separate user accounts. One of them is fine, but the other is too slow. In one of the user accounts, I have notes suggesting that I set browser.fullscreen.animateUp as mentioned by this article. However, this setting can no longer be found in about:config . In both user accounts, I use GNOME and have enabled the Impatience extension for GNOME, set to 0.66 of the default delay. I have notes suggesting that this was also a very useful step to help with the Firefox fullscreen animation specifically. I can see no other settings in Impatience. Setting the delay to zero does not fix my problem. |
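One knob worth trying, hedged since browser.fullscreen.animateUp is gone and pref names shift between releases: the full-screen-api.transition-duration.* prefs control the fade when entering/leaving fullscreen, and "0 0" disables it. A sketch that appends them to a profile's user.js:

```shell
#!/bin/sh
# write_fullscreen_prefs PROFILE_DIR: append prefs that disable the
# fullscreen transition. Pref names are assumptions; verify in about:config.
write_fullscreen_prefs() {
    cat >> "$1/user.js" <<'EOF'
user_pref("full-screen-api.transition-duration.enter", "0 0");
user_pref("full-screen-api.transition-duration.leave", "0 0");
EOF
}
# e.g.: write_fullscreen_prefs ~/.mozilla/firefox/xxxxxxxx.default
```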
Using Netcat but client refused. Why? Posted: 22 Aug 2021 09:35 AM PDT Server side: nc -l -p 192.168.1.229 1234 Client side: nc 192.168.1.229 1234 But it cannot connect. Why? |
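A likely explanation, hedged since netcat flavors differ: -p takes a port number, not an address, so in the server command "192.168.1.229" is parsed as the port and the listener never starts — hence the client's "connection refused". A reference sheet of the usual forms:

```shell
#!/bin/sh
# Print the common correct invocations (traditional vs OpenBSD netcat).
nc_cheatsheet() {
    printf '%s\n' \
        'server (traditional nc):   nc -l -p 1234' \
        'server (OpenBSD nc):       nc -l 1234' \
        'server, bind one address:  nc -l -s 192.168.1.229 -p 1234' \
        'client:                    nc 192.168.1.229 1234'
}
nc_cheatsheet
```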
Redirect web server from port 5000 to port 80 on localhost (Fedora, firewall-cmd) Posted: 22 Aug 2021 08:04 AM PDT On Fedora 24, a web server (Node.js) is running (standalone, no apache/nginx) on port 5000. http://localhost:5000 works How to make it accessible on port 80? Tried this systemctl restart firewalld firewall-cmd --add-service=http --permanent firewall-cmd --add-masquerade --permanent firewall-cmd --add-forward-port=port=80:proto=tcp:toport=5000 firewall-cmd --list-all FedoraWorkstation (active) target: default icmp-block-inversion: no interfaces: wlp3s0 sources: services: mdns ssh dhcpv6-client samba-client https http ports: 1025-65535/tcp 1025-65535/udp protocols: masquerade: yes forward-ports: port=80:proto=tcp:toport=5000:toaddr= source-ports: icmp-blocks: rich rules: Additional info Tried all the above with --zone=external too Running node as root on port 80 works. Note, there's no IPv4: netstat -tpln Active Internet connections (only servers) Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN 1/systemd tcp 0 0 0.0.0.0:4433 0.0.0.0:* LISTEN 3977/deluge-gtk tcp 0 0 0.0.0.0:51157 0.0.0.0:* LISTEN 3977/deluge-gtk tcp 0 0 0.0.0.0:5432 0.0.0.0:* LISTEN 900/postgres tcp 0 0 0.0.0.0:17500 0.0.0.0:* LISTEN 3203/dropbox tcp 0 0 127.0.0.1:17600 0.0.0.0:* LISTEN 3203/dropbox tcp 0 0 127.0.0.1:17603 0.0.0.0:* LISTEN 3203/dropbox tcp6 0 0 :::111 :::* LISTEN 1/systemd tcp6 0 0 :::4433 :::* LISTEN 3977/deluge-gtk tcp6 0 0 :::51157 :::* LISTEN 3977/deluge-gtk tcp6 0 0 :::5432 :::* LISTEN 900/postgres tcp6 0 0 :::17500 :::* LISTEN 3203/dropbox tcp6 0 0 :::34017 :::* LISTEN 10532/code tcp6 0 0 :::5858 :::* LISTEN 30394/node tcp6 0 0 :::5000 :::* LISTEN 30394/node |
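Two things worth checking, offered as hypotheses. First, a zone's forward-port rules apply to traffic arriving on that zone's interface (wlp3s0 here), not to loopback, so http://localhost:80 will not exercise them; test from another host. Second, --permanent changes only take effect after a reload, so runtime and permanent configuration should be brought in sync:

```shell
#!/bin/sh
# Print the firewalld consistency steps; run each line with sudo.
firewalld_sync_plan() {
    printf 'firewall-cmd --runtime-to-permanent\n'  # persist the runtime forward-port
    printf 'firewall-cmd --reload\n'
}
firewalld_sync_plan
# Also: netstat shows node on tcp6 only; to be safe, bind IPv4 explicitly
# in the (hypothetical) Node app:  app.listen(5000, '0.0.0.0');
```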
should I disable both systemd and sysvinit to disable a service from a runlevel? Posted: 22 Aug 2021 09:19 AM PDT My application should run on systems that use systemd as well as on older platforms where systemd is not available, so I register my service into the runlevels using both chkconfig and systemctl enable. What should I do to disable my service? Should I disable it using both systemctl and chkconfig? |
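For symmetry with registration, one sketch is to detect the running init system and disable with its native tool; checking for the /run/systemd/system directory is the method the sd_booted(3) man page documents:

```shell
#!/bin/sh
# disable_service NAME: disable a boot-time service under systemd or SysV.
disable_service() {
    if [ -d /run/systemd/system ]; then
        systemctl disable "$1"        # systemd is PID 1
    else
        chkconfig "$1" off            # SysV init platforms with chkconfig
    fi
}
# usage: disable_service myapp
```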
Avahi on FreeBSD: Machine is Seen but Does Not See Posted: 22 Aug 2021 10:19 AM PDT I have a FreeBSD 10.3 box with Avahi 0.6.31 which is visible to the other machines on my network, but which is itself unable to resolve any names in the .local domain. That is to say, all the other machines show up in avahi-browse and avahi-resolve-host-name , but getent hosts <hostname> returns nothing. I have two other boxen on the same network: one Ubuntu 14.04 with Avahi 0.6.31, and one OSX 10.4 with mDNSResponder, both of which can resolve the FreeBSD box. Both Avahi machines have identical avahi-daemon.conf files, and each machine's nsswitch.conf contains the line hosts: files mdns4_minimal [NOTFOUND=return] dns mdns4 What have I missed? |
Fresh Debian install. What files do I restore from a backup? Posted: 22 Aug 2021 09:07 AM PDT I just installed Debian 8.2 Jessie fresh on a new machine. I ran tar -zxvpf myBackup.tar.gz to extract folders from a backup from my old machine, which also ran Debian 8.2 Jessie. The folders I extracted are etc , home , root , usr and var . What files from each do I copy over to my new install? |
Add ssh pubkey to authorized_keys on local host (skipping existent) Posted: 22 Aug 2021 07:40 AM PDT I needed this to add the Jenkins pubkey to my host's authorized_keys when starting a Docker container with Jenkins. I looked for solutions but could not find a ready-made one on the internet. It may seem obvious, but it was not for me, at least :) |
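In case it helps others, a minimal sketch of the idempotent append (the function name and the example key file are my own):

```shell
#!/bin/sh
# add_authorized_key "KEY LINE" [FILE]: append a public key to
# authorized_keys only if the exact line is not already present,
# creating the file with sshd-friendly permissions.
add_authorized_key() {
    key="$1"
    file="${2:-$HOME/.ssh/authorized_keys}"
    dir="$(dirname "$file")"
    mkdir -p "$dir" && chmod 700 "$dir"
    touch "$file" && chmod 600 "$file"
    grep -qxF "$key" "$file" || printf '%s\n' "$key" >> "$file"
}
# e.g.: add_authorized_key "$(cat jenkins_id_rsa.pub)"
```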