How to properly use tail to concatenate all hidden files Posted: 23 Jun 2021 09:26 AM PDT Issue I want to be able to : - concatenate all files in a directory (regular and hidden),
- but I would also like to display the title of each file at the beginning of each concatenation.
I found some solutions on the web, where I can do #2 with tail -n +1 * 2>/dev/null super neat trick, but it doesn't include hidden files like if I were to do: cat * .* 2>/dev/null or even head * .* 2>/dev/null The cat command will do the trick but doesn't include the filename, and the head command will not print/concatenate the whole contents of each file. Question Is there a way to do what I need to do with tail , if not what is a good substitute to achieve the same result/output. Update with an example The tail command when attempting to concatenate all files (regular and hidden) [kevin@PC-Fedora tmp]$ ls -la total 8 drwx------ 2 user user 4096 Jun 23 09:24 . drwxr-xr-x. 54 user user 4096 Jun 23 08:21 .. -rw-rw-r-- 1 user user 0 Jun 23 09:24 .f1 -rw-rw-r-- 1 user user 0 Jun 23 09:24 f1 -rw-rw-r-- 1 user user 0 Jun 23 09:24 .f2 -rw-rw-r-- 1 user user 0 Jun 23 09:24 f2 -rw-rw-r-- 1 user user 0 Jun 23 09:24 .f3 -rw-rw-r-- 1 user user 0 Jun 23 09:24 f3 -rw-rw-r-- 1 user user 0 Jun 23 09:24 .f4 -rw-rw-r-- 1 user user 0 Jun 23 09:24 f4 -rw-rw-r-- 1 user user 0 Jun 23 09:24 f5 [user@PC-Fedora tmp]$ tail -n +1 * ==> f1 <== ==> f2 <== ==> f3 <== ==> f4 <== ==> f5 <== [user@PC-Fedora tmp]$ tail -n +1 * .* ==> f1 <== ==> f2 <== ==> f3 <== ==> f4 <== ==> f5 <== ==> . <== tail: error reading '.': Is a directory [user@PC-Fedora tmp]$ |
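A possible sketch that keeps the tail header trick while also pulling in dotfiles; the extra globs deliberately skip . and .. , and -v forces the header even when only one file matches (file names are just the ones from the question):
    tail -n +1 -v -- * .[!.]* ..?* 2>/dev/null
or, in bash, let the shell expand dotfiles for you:
    shopt -s dotglob nullglob
    tail -n +1 -v -- *
The 2>/dev/null in the first form also hides the error for any glob that matches nothing or matches a directory.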
Boot Ubuntu ISO under qemu Posted: 23 Jun 2021 09:19 AM PDT Probably a very simple issue: I'm trying to boot the official Ubuntu 20.04.2 ISO image under qemu. For this purpose I've first created a disk image: qemu-img create ubuntu-20.04.2.0.img 10G And then tried: qemu-system-x86_64 -enable-kvm -hda ubuntu-20.04.2.0.img -cdrom ubuntu-20.04.2.0-desktop-amd64.iso -boot d -m 512 Which results in a kernel panic because "no working init" was found. I've encountered this issue before when trying to boot an image for the wrong architecture but I don't see what the problem is here. |
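One hedged guess: 512 MB is very little RAM for the 20.04 desktop live image, and the live initramfs can fail with exactly this kind of panic when it runs out of memory. The same command with more memory (the sizes below are arbitrary examples) may behave differently:
    qemu-system-x86_64 -enable-kvm -m 4096 -smp 2 \
      -hda ubuntu-20.04.2.0.img \
      -cdrom ubuntu-20.04.2.0-desktop-amd64.iso -boot d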
Pip3 doesn't work on my kali linux Posted: 23 Jun 2021 09:12 AM PDT So I'm learning how to create apk files with Python, and as I have read, I need Kali Linux to do it. I installed python3 and pip3 on Kali Linux, but when I try to install the library that I need to make my application, it throws this error. I don't know what's wrong and I can't find any solutions. Could you help me? This is the error:
Defaulting to user installation because normal site-packages is not writeable
WARNING: pip is configured with locations that require TLS/SSL, however the ssl module in Python is not available.
WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError("Can't connect to HTTPS URL because the SSL module is not available.")': /simple/kivy/
WARNING: Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError("Can't connect to HTTPS URL because the SSL module is not available.")': /simple/kivy/
WARNING: Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError("Can't connect to HTTPS URL because the SSL module is not available.")': /simple/kivy/
WARNING: Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError("Can't connect to HTTPS URL because the SSL module is not available.")': /simple/kivy/
WARNING: Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError("Can't connect to HTTPS URL because the SSL module is not available.")': /simple/kivy/
Could not fetch URL https://pypi.org/simple/kivy/: There was a problem confirming the ssl certificate: HTTPSConnectionPool(host='pypi.org', port=443): Max retries exceeded with url: /simple/kivy/ (Caused by SSLError("Can't connect to HTTPS URL because the SSL module is not available.")) - skipping
ERROR: Could not find a version that satisfies the requirement kivy (from versions: none)
ERROR: No matching distribution found for kivy
WARNING: pip is configured with locations that require TLS/SSL, however the ssl module in Python is not available.
Could not fetch URL https://pypi.org/simple/pip/: There was a problem confirming the ssl certificate: HTTPSConnectionPool(host='pypi.org', port=443): Max retries exceeded with url: /simple/pip/ (Caused by SSLError("Can't connect to HTTPS URL because the SSL module is not available.")) - skipping |
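This error usually means the python3 being used was built without the ssl module (common with a source-built Python). A hedged sketch, assuming the distro-packaged Python is acceptable:
    sudo apt update
    sudo apt install -y python3 python3-pip libssl-dev libffi-dev
    python3 -c 'import ssl; print(ssl.OPENSSL_VERSION)'   # should print a version, not raise ImportError
    python3 -m pip install --user kivy
    # if python3 was compiled from source, rebuild it after installing libssl-dev so the ssl module gets built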
How to add a new network device on CentOS 7? Posted: 23 Jun 2021 09:03 AM PDT My VPS is assigned two IPs but "ip a" shows only one real network interface eth0 and one loopback interface lo. I copy /etc/sysconfig/network-scripts/ifcfg-eth0 to /etc/sysconfig/network-scripts/ifcfg-eth1 and edit it to reflect the new IP. But "systemctl restart network" fails with the error "Bringing up interface eth1: Error: Connection activation failed: No suitable device found for this connection." So how can I add a new device for setting up eth1? Or is it impossible to add a new device without adding another physical network adapter? If so, can I assign the two IPs to the same interface eth0? How? |
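A second IP generally does not need a second ifcfg file; it can be added as an additional address on eth0. A hedged sketch with placeholder addresses (replace with the real ones; the connection name passed to nmcli is also an assumption):
    # /etc/sysconfig/network-scripts/ifcfg-eth0
    IPADDR=192.0.2.10
    PREFIX=24
    IPADDR1=192.0.2.11
    PREFIX1=24
or, if NetworkManager manages the interface:
    nmcli connection modify eth0 +ipv4.addresses 192.0.2.11/24
    nmcli connection up eth0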
Weird output when redirecting bash prompt to a file Posted: 23 Jun 2021 09:10 AM PDT I redirected the standard error of a bash command to a file, and the bash prompt got redirected. But when I print the content of the file, it is empty. Where did the bash prompt go? And when I redirect stdout of bash to a file, it redirects the output and not the prompt, as expected, but when printing the content of the file there were some characters from the prompt too. How? Value of $PS1 and $PROMPT_COMMAND: please explain this to me. |
Why is safeeyes using the internet? Posted: 23 Jun 2021 08:51 AM PDT Safeeyes is software that provides reminders to take a break. Why does it need to access the network? safeeyes was seen in netstat using the network. |
Linux / iptables - NATing between overlapping subnets Posted: 23 Jun 2021 08:50 AM PDT I'm trying to NAT from one Wireguard interface to another while retaining a private subnet. Here's my setup. The trick here is that I have overlapping subnets on the interfaces. My VPN server has three interfaces. Public facing ens3 , personal Wireguard wg0 (172.16.1.1/24), and VPN service Wireguard wg1 (172.28.112.173/12) ens3 is a public interface with a routable public ipv4 address that has access to the internet wg0 is a Wireguard interface for all of my devices on the 172.16.1.0/24 network. All my devices are connected to this interface and can communicate with each other, and are currently NATd out to use ens3 . wg1 is a Wireguard tunnel to a VPN provider on the 172.16.0.0/12 network. The VPN provider assigns an IP address and the router this interface that goes out to the public internet is on 172.16.0.1. When I set iptables to NAT to ens3 , everything works as expected. Devices on 172.16.1.0/24 can communicate with each other and also get out to the internet using ens3 's public interface When I set iptables to NAT to wg1 , devices cannot get out to the internet using wg1 's public interface. In fact, tcpdump shows the box is forwarding the packets coming from wg0 to the router on ens3 which obviously doesn't work. My end goal is to get clients wg0 to NAT out to wg1 . The subnets that are on wg0 and wg1 can't be reassigned or changed. I have a feeling this can be done with policy based routing but after spending a few days on this and trying many different configurations I still can't get it to work. I would greatly appreciate it if someone could offer me some insight! iptables dump: # Generated by xtables-save v1.8.2 on Sat Jun 19 03:43:54 2021 *nat :PREROUTING ACCEPT [15677:1866069] :INPUT ACCEPT [10279:816931] :POSTROUTING ACCEPT [5314:492688] :OUTPUT ACCEPT [4806:467393] -A PREROUTING -i ens3 -p udp -m udp --dport 1701 -m comment --comment "Also allow 1701/udp for WireGuard" -j REDIRECT --to-ports 500 -A PREROUTING -i ens3 -p udp -m udp --dport 443 -m comment --comment "Also allow 443/udp for WireGuard" -j REDIRECT --to-ports 500 -A POSTROUTING -o ens3 -j MASQUERADE COMMIT # Completed on Sat Jun 19 03:43:54 2021 # Generated by xtables-save v1.8.2 on Sat Jun 19 03:43:54 2021 *filter :INPUT DROP [2775:810347] :FORWARD ACCEPT [1048627:1182856653] :OUTPUT ACCEPT [14961117:11097488012] -A INPUT -m state --state RELATED,ESTABLISHED -m comment --comment "Allow ALL RELATED, ESTABLISHED" -j ACCEPT -A INPUT -i lo -m state --state NEW -m comment --comment "Allow ALL from lo" -j ACCEPT -A INPUT -p icmp -m state --state NEW -m comment --comment "Allow ICMP" -j ACCEPT -A INPUT -p tcp -m state --state NEW -m multiport --dports 22 -m comment --comment "Allow SSH" -j ACCEPT -A INPUT -i wg0 -m state --state NEW -m comment --comment "Allow ALL from wg0" -j ACCEPT -A INPUT -p udp -m state --state NEW -m multiport --dports 500 -m comment --comment "Allow WireGuard" -j ACCEPT -A FORWARD -p tcp -m tcp --tcp-flags SYN,RST SYN -m comment --comment "Clamp MSS to PMTU" -j TCPMSS --clamp-mss-to-pmtu COMMIT # Completed on Sat Jun 19 03:43:54 2021 routing table dump: default via 161.129.xxx.xxx dev ens3 161.129.xxx.xxx/24 dev ens3 proto kernel scope link src 161.129.xxx.xxx 172.16.1.0/24 dev wg0 proto kernel scope link src 172.16.1.1 |
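A hedged sketch of the policy-routing approach, using the interface names and subnets from the question; the table name/number and rule priorities are arbitrary choices:
    # NAT wg0 clients out of wg1 instead of ens3
    iptables -t nat -A POSTROUTING -s 172.16.1.0/24 -o wg1 -j MASQUERADE
    # give traffic sourced from wg0 its own routing table whose default route is wg1
    echo '100 vpnout' >> /etc/iproute2/rt_tables
    ip route add default dev wg1 table vpnout
    # keep wg0-to-wg0 traffic on the main table, send everything else from wg0 via wg1
    ip rule add from 172.16.1.0/24 to 172.16.1.0/24 lookup main priority 999
    ip rule add from 172.16.1.0/24 lookup vpnout priority 1000
The existing MASQUERADE on ens3 can stay for the server's own traffic; only packets arriving from 172.16.1.0/24 are steered to the wg1 table by the rules above.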
Count specific characters before substring in its line Posted: 23 Jun 2021 08:30 AM PDT I have a text file that looks like this: anything zero,one,two,three anything ...and this script: echo $(sed '2q;d' file) read INPUT The script should then find $INPUT in the second line and count the commas before it (only those that are in the same line). How do I do that? Please use sed for your solution if possible. |
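A hedged sketch along the lines requested (sed for the extraction, tr and wc for the counting); it assumes $INPUT contains no characters that are special to sed:
    read INPUT
    sed '2q;d' file | sed "s/${INPUT}.*//" | tr -cd ',' | wc -c
For example, with the second line zero,one,two,three and INPUT=two , the first sed picks the line, the second deletes everything from the match onward, and tr/wc count the remaining commas, printing 2.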
OpenVPN3: Error calling StartServiceByName... Permission Denied... as root? Posted: 23 Jun 2021 08:20 AM PDT I cannot figure out how it's possible that I'm getting "permission denied" as root. Something might be missing execute permission, but I cannot figure out from this error message what that might be. In any event, I followed the installation instructions exactly as written so I'm at a loss. Any suggestions on where to begin troubleshooting this? root@bigboi:/home/andrew# apt install openvpn3 Reading package lists... Done Building dependency tree Reading state information... Done The following NEW packages will be installed: openvpn3 0 upgraded, 1 newly installed, 0 to remove and 114 not upgraded. Need to get 1,447 kB of archives. After this operation, 6,779 kB of additional disk space will be used. Get:1 https://swupdate.openvpn.net/community/openvpn3/repos focal/main amd64 openvpn3 amd64 13~beta-1+focal [1,447 kB] Fetched 1,447 kB in 19s (77.6 kB/s) Selecting previously unselected package openvpn3. (Reading database ... 170788 files and directories currently installed.) Preparing to unpack .../openvpn3_13~beta-1+focal_amd64.deb ... Unpacking openvpn3 (13~beta-1+focal) ... Setting up openvpn3 (13~beta-1+focal) ... openvpn3-autoload.service is a disabled or a static unit, not starting it. Processing triggers for dbus (1.12.16-2ubuntu2.1) ... Processing triggers for man-db (2.9.1-1) ... root@bigboi:/home/andrew# openvpn3 session-start --config /home/andrew/.vpn/client.ovpn ** ERROR ** Failed preparing proxy: Error calling StartServiceByName for net.openvpn.v3.sessions: Failed to execute program net.openvpn.v3.sessions: Permission denied |
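One hedged place to start: this message comes from D-Bus failing to execute the session-manager binary named in its activation file, which can happen if that binary sits on a filesystem mounted noexec. The activation file name below is a guess based on the bus name in the error; adjust as needed:
    # which binary does D-Bus try to start for the sessions service?
    grep -r '^Exec=' /usr/share/dbus-1/system-services/net.openvpn.v3.sessions.service
    # is the filesystem holding that path mounted noexec?
    findmnt -T /usr/lib      # replace /usr/lib with the directory from the Exec= line
If the mount options show noexec, remounting without it (or reinstalling to a different prefix) would be the next thing to try.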
Synchronizing NTP machines with a high root-time server Posted: 23 Jun 2021 08:13 AM PDT I have a small network of machines that use systemd-timesyncd.service and need their clocks to be synchronized. They don't need to be correct, just synchronized. I have designated one of the machines as an NTP server and the rest of the machines all point to this one, but they take hours to synchronize and when they do, it's terribly inconsistent: Jun 22 18:09:16 host systemd-timesyncd[10515]: Initial synchronization to time server 10.10.1.30:123 (10.10.1.30). Jun 22 18:25:50 host systemd-timesyncd[10515]: Server has too large root distance. Disconnecting. Jun 22 18:34:22 host systemd-timesyncd[10515]: Server has too large root distance. Disconnecting. Jun 22 18:51:26 host systemd-timesyncd[10515]: Server has too large root distance. Disconnecting. Jun 22 19:25:34 host systemd-timesyncd[10515]: Server has too large root distance. Disconnecting. Jun 22 19:59:43 host systemd-timesyncd[10515]: Server has too large root distance. Disconnecting. Jun 22 20:33:51 host systemd-timesyncd[10515]: Initial synchronization to time server 10.10.1.30:123 (10.10.1.30). Jun 22 21:07:59 host systemd-timesyncd[10515]: Server has too large root distance. Disconnecting. Jun 22 21:42:07 host systemd-timesyncd[10515]: Server has too large root distance. Disconnecting. Jun 22 22:16:16 host systemd-timesyncd[10515]: Initial synchronization to time server 10.10.1.30:123 (10.10.1.30). How do I fix this? |
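The "root distance" complaint means the server is advertising itself as too far from any reference clock. A hedged sketch, assuming the designated server runs chrony and 10.10.1.0/24 is the LAN:
    # /etc/chrony.conf on 10.10.1.30: serve the local clock even when unsynchronised
    local stratum 4
    allow 10.10.1.0/24
    # /etc/systemd/timesyncd.conf on the clients: point at the server and relax the limit
    [Time]
    NTP=10.10.1.30
    RootDistanceMaxSec=15
Either change alone may be enough; the first lowers the advertised root distance, the second raises what the clients will accept.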
ssh says Network is unreachable although network is reachable Posted: 23 Jun 2021 08:27 AM PDT I am using ubuntu 20.04. ssh was working until I rebooted my computer. Since then I am not able to connect using ssh anymore. "ssh exemple.com" surprisingly works. but "ssh 8.8.8.8", or "ssh google.com" outputs "Network is unreachable" and telnet google.com 22 gives "Network is unreachable" although I can navigate on my browser. /etc/ssh/ssh_config contains "Port 22" I already tried all approaches I could found on the internet. My guess is that ssh request is blocked before reaching the network, but I don't know how to address this problem. Thanks Edit : ssh ubuntu@18.198.187.192 -vvv would ask for password, but now shows: OpenSSH_8.2p1 Ubuntu-4ubuntu0.2, OpenSSL 1.1.1f 31 Mar 2020 debug1: Reading configuration data /home/antoine/.ssh/config debug1: Reading configuration data /etc/ssh/ssh_config debug3: /etc/ssh/ssh_config line 19: Including file /etc/ssh/ssh_config.d/ssh_ant.conf depth 0 debug1: Reading configuration data /etc/ssh/ssh_config.d/ssh_ant.conf debug1: /etc/ssh/ssh_config line 21: Applying options for * debug2: resolve_canonicalize: hostname 18.198.187.192 is address debug2: ssh_connect_direct debug1: Connecting to 18.198.187.192 [18.198.187.192] port 22. debug1: connect to address 18.198.187.192 port 22: Connection timed out ssh: connect to host 18.198.187.192 port 22: Connection timed out |
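Some hedged read-only diagnostics to see which route (or firewall rule) the failing destinations hit, since a browser may be going through a proxy or a different route than ssh:
    ip route get 8.8.8.8
    ip route show
    ip rule show
    sudo iptables -L OUTPUT -nv
A stale VPN or docker route left over from before the reboot, or an OUTPUT rule limited to certain ports, would show up here.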
How to select only info starting with a pattern in a column and print it in another one Posted: 23 Jun 2021 08:20 AM PDT I have a data file A.tsv (field separator = \t ):
    id    clade    mutation
    243   40A      S:ojo,L:juju,S:lili
    254
    267   40B      J:jijy,S:asel,M:ase
And I want to print in another column (in a new file B.tsv ) only the mutations that start with S: , like this:
    id    clade    mutation               S_mutation
    243   40A      S:ojo,L:juju,S:lili    S:ojo,S:lili
    254
    267   40B      J:jijy,S:asel,M:ase    S:asel
I tried some commands with awk with no result: awk -F '\t' 'BEGIN { OFS = FS } NR==1 {$(NF+1)="S_Mutation"} ; NR != 1 { $4 = ($3==^[Ss] ? $4 ) }; 1' A.tsv > B.tsv Do you have an idea how to do that? Thanks |
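A hedged awk sketch that splits column 3 on commas and keeps only the S: entries (rows with no mutation field get an empty fourth column):
    awk -F'\t' 'BEGIN{OFS=FS}
      NR==1 { print $0, "S_mutation"; next }
      { n = split($3, a, ","); out = ""
        for (i = 1; i <= n; i++) if (a[i] ~ /^S:/) out = (out == "" ? a[i] : out "," a[i])
        print $0, out }' A.tsv > B.tsv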
In a user namespace as non-root, on a nosuid,nodev filesystem, why does a bind mount succeed but a remount fails? Posted: 23 Jun 2021 09:53 AM PDT In a Linux user namespace, as non-root, I bind mount /tmp/foo to itself. This succeeds. Then, I try to remount /tmp/foo to be read-only. If /tmp is mounted with nosuid or nodev , then the remount fails. Otherwise, the remount succeeds. Is there some reason why nosuid and/or nodev prevent the remount from succeeding? Is this behavior documented somewhere? I'm puzzled, as I would expect the bind mount and remount to either both succeed, or both fail. Here is the code to reproduce the bind mount and remount: #define _GNU_SOURCE /* unshare */ #include <errno.h> /* errno */ #include <sched.h> /* unshare */ #include <stdio.h> /* printf */ #include <string.h> /* strerror */ #include <sys/mount.h> /* mount */ #include <unistd.h> /* getuid */ int main() { printf ( "getuid %d\n", getuid() ); int rv = unshare ( CLONE_NEWNS | CLONE_NEWPID | CLONE_NEWUSER ); printf ( "unshare %2d %s\n", rv, strerror(errno) ); rv = mount ( "/tmp/foo", "/tmp/foo", 0, MS_BIND | MS_REC, 0 ), printf ( "mount %2d %s\n", rv, strerror(errno) ); rv = mount ( "/tmp/foo", "/tmp/foo", 0, MS_BIND | MS_REMOUNT | MS_RDONLY, 0 ), printf ( "remount %2d %s\n", rv, strerror(errno) ); return 0; } Sample output: $ mkdir -p /tmp/foo $ mount | grep /tmp tmpfs on /tmp type tmpfs (rw,nosuid,nodev,relatime,inode64) $ gcc test.c && ./a.out getuid 1000 unshare 0 No error information mount 0 No error information remount -1 Operation not permitted $ uname -a Linux hostname 5.12.12_1 #1 SMP 1624132767 x86_64 GNU/Linux Whereas, if /tmp is mounted with neither nosuid nor nodev , then the bind mount and the remount will both succeed, as follows: $ mkdir -p /tmp/foo $ mount | grep /tmp tmpfs on /tmp type tmpfs (rw,relatime,inode64) $ gcc test.c && ./a.out getuid 1000 unshare 0 No error information mount 0 No error information remount 0 No error information |
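A shell version of the same experiment suggests what is going on: inside an unprivileged user namespace, a remount has to keep the mount flags (nosuid, nodev, ...) that were locked in by the original mount, so adding them to the remount makes the read-only change succeed. A hedged sketch with util-linux tools:
    unshare -Urm sh -c '
      mount --bind /tmp/foo /tmp/foo
      mount -o remount,bind,ro,nosuid,nodev /tmp/foo
      findmnt /tmp/foo
    '
    # in the C program, the equivalent change would be OR-ing MS_NOSUID | MS_NODEV into the remount flags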
Unzip a .zip which includes directories with spaces in the name Posted: 23 Jun 2021 08:52 AM PDT I have a problem with unzipping my transferred .zip file. First, I zipped an owncloud data directory with the encryption parameter. Then I transferred it to my new server and would like to unzip it there. The problem is that my .zip file contains directories which have spaces in the directory name, like Jan Tester . On my old server the directory was displayed without quotes around the name. After I unzipped the file, every folder name with a space in it was surrounded with quotes, for example 'Jan Tester' . I would be very grateful for any kind of help, so that I can use my old file structure on my new server. |
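One hedged possibility: the quotes are not part of the extracted names at all, but come from newer coreutils ls quoting names that contain spaces. A quick check:
    ls --quoting-style=literal   # or: ls -N
    ls -b
If the names appear without quotes here, the unzip result is already identical to the old server and nothing needs fixing.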
How to return the sum of returned values in Linux? Posted: 23 Jun 2021 09:23 AM PDT How do I sum the "Update time" values and the "Gups:" values, so I get the total of each? I appreciate any help! Code: root@:~/gups# mpirun --allow-run-as-root -np 2 gups_vanilla 20 1000 1024 Number of procs: 1 Vector size: 1048576 Max datums during comm: 0 Max datums after comm: 1024 Excess datums (frac): 0 (0) Bad locality count: 0 Update time (secs): 0.003 Gups: 0.301295 Number of procs: 1 Vector size: 1048576 Max datums during comm: 0 Max datums after comm: 1024 Excess datums (frac): 0 (0) Bad locality count: 0 Update time (secs): 0.004 Gups: 0.233969 root@:~/gups# mpirun --allow-run-as-root -np 2 gups_vanilla 20 1000 1024 | awk -F: '$1 == "Gups"{sum+=$2}END{print sum}' 0.429367 |
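A hedged awk sketch that totals both fields in one pass, keeping the -F: split already used in the question:
    mpirun --allow-run-as-root -np 2 gups_vanilla 20 1000 1024 | \
      awk -F: '$1 ~ /^Update time/ {t += $2}
               $1 == "Gups"        {g += $2}
               END {print "Total update time:", t; print "Total Gups:", g}'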
How to filter data from multiple semicolon separated columns Posted: 23 Jun 2021 08:49 AM PDT I have a tab delimited file with 3 columns that include semicolon separated data. I want to filter values in each column as such (meet all 3 criteria across the 3 columns): first column (<-0.5), second column (>1), third column (>2). The real data has multiple columns. Input
    -0.6;0.14;-0.56;0.2    10.4;NA;5.1;2    3;1;4;3
    -0.9;-0.16;-1.1        2.4;0.1;0.9      10;1;3
Desired output
    -0.6;-0.56    10.4;5.1    3;4
    -0.9          2.4         10
For each row, the number of values in each column should be the same before and after filtering. |
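A hedged awk sketch for the three-column case shown (it assumes each column of a row holds the same number of values; the NA entries simply fail the numeric tests, which matches the desired output):
    awk -F'\t' 'BEGIN{OFS=FS}
      { n = split($1, a, ";"); split($2, b, ";"); split($3, c, ";")
        o1 = o2 = o3 = ""
        for (i = 1; i <= n; i++)
          if (a[i]+0 < -0.5 && b[i]+0 > 1 && c[i]+0 > 2) {
            o1 = (o1 == "" ? a[i] : o1 ";" a[i])
            o2 = (o2 == "" ? b[i] : o2 ";" b[i])
            o3 = (o3 == "" ? c[i] : o3 ";" c[i])
          }
        print o1, o2, o3 }' input.tsv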
Can I migrate ubuntu journald logs? Posted: 23 Jun 2021 09:35 AM PDT I have an app registered as a service in Ubuntu 16.04 If I type: journalctl -u myapp.service I can see the logs for my app. I am moving my app to a new VM where the same service will be in place. Is it possible to migrate the log files to the new VM so that if I type journalctl -u myapp.service it will show all my old logs and any new logs seamlessly? I have tried to swap in the contents of the old /var/log/journal directory into the new VM, and restarting the systemd-journald service, but it doesn't seem to work. More Details: - The logs are stored in /var/log/journal/< machine-id >/
- the directory contents look like this:
$ ls /var/log/journal/05b6b1e76c6040cc99b4d34977a98eca/ system@0005b3c2d13e0c14-f5dafb4f81b14546.journal~ system@0005bccfbf5ba0a3-489b00efc9586207.journal~ system@0005bd32e69bb664-572fb5c4ae0871b3.journal~ system@0005c4ce02fb0068-2e040749ddd4a06f.journal~ system@9b08b416ae4c47a78c24b4ed77c39ea2-0000000000000001-0005b3c2d1380bae.journal system@9b08b416ae4c47a78c24b4ed77c39ea2-0000000000000248-0005bccbf3f7d7c1.journal system@e5c655526bb54aa886764039cd37f897-0000000000000001-0005c4ce02f66caf.journal system.journal user-1000@0005b3c2d1bad476-9ffb04f7ed320462.journal~ user-1000@0005bccdb3bf35c7-f73e5bfc47f269c9.journal~ user-1000@0005bd32e724853a-f40e7f8131217c05.journal~ user-1000@0005c4cf22d203b5-66cc27013c97695d.journal~ user-1000@7b4df282ccfe4816a30db088f2621493-00000000000000ab-0005b3c2d1be9c3b.journal user-1000.journal - There are no files in /run/log/
- both old and new VMs share the same service, service-running user, and machine-id (this is because ultimately both VMs stem from (were copied from) the same VM just at different times and with minor software/configuration settings)
- the user I access the logs with 'ubuntu' has the same groups in both VMs:
$ groups ubuntu adm dialout cdrom floppy sudo audio dip video plugdev netdev lxd - journal version on both VMs is:
$ journalctl --version systemd 229 +PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ -LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD -IDN - I am not looking to hold the logs longer than what is configured in the journal settings
What's Working - replacing the contents of the journal folder entirely
- adjusting file ownership to root:systemd-journal as they were originally
- 'root' can now see the full log
- 'ubuntu' can only see some 'current' logs, which are confusing, as they are not what was copied over
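A hedged migration sketch (paths as in the question; the setfacl line is what normally grants the adm group read access to journal files, which may explain why root and ubuntu see different entries after the copy):
    # old VM
    sudo tar -C /var/log/journal -czf journal-backup.tar.gz .
    # new VM
    sudo systemctl stop systemd-journald
    sudo tar -C /var/log/journal -xzf journal-backup.tar.gz
    sudo chown -R root:systemd-journal /var/log/journal
    sudo setfacl -Rnm g:adm:rx,d:g:adm:rx /var/log/journal/
    sudo systemctl start systemd-journald
    # old entries can also be read straight from the copied archive files
    sudo journalctl --file '/var/log/journal/*/system@*.journal' -u myapp.service
Since both VMs share the machine-id, journalctl should interleave the old and new entries once the files are readable.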
|
How to make Vim as MANPAGER hide line numbers by default? Posted: 23 Jun 2021 09:28 AM PDT I use Vim as Manpager as follows: export MANPAGER='vim -M +MANPAGER -' Works pretty well but every time I execute man programname it shows line numbers. How can I hide the line numbers by default when using vim as man pager? Currently I'm doing it manually every time. Note: I don't want to hide the line numbers elsewhere, only when reading man pages. |
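A hedged tweak of the same export, just switching numbering off after the MANPAGER command runs; alternatively a FileType autocommand keeps the change scoped to man buffers only:
    export MANPAGER='vim -M +MANPAGER -c "set nonumber norelativenumber" -'
    # or keep the export as-is and add this to ~/.vimrc:
    autocmd FileType man setlocal nonumber norelativenumber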
NFSv4 wrong effective user / owner, sec=krb5 mount squashes to anonymous user Posted: 23 Jun 2021 09:10 AM PDT I'm setting up kerberized NFSv4 for personal use - manually configured NFS, KDC
- no nameservers (using
/etc/hosts instead), no LDAP - same users on all machines (not necessarily the same id) and using id mapping for all security modes (
nfs4_disable_idmapping set to 'N') I've got two machines, both running Ubuntu 20.04 LTS arhiv.pecar (local address 192.168.56.200 ) has the NFS server and the KDC client.pecar (local address 192.158.56.100 ) is the client All plumbing seems to work and I can mount the share just fine, but if the share is exported with sec=sys server exportfs -v output /srv/export <world>(rw,async,wdelay,no_root_squash,no_subtree_check,sec=sys,rw,secure,no_root_squash,no_all_squash) client mount output arhiv.pecar:/srv/export on /mnt type nfs4 (rw,relatime,vers=4.2,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=krb5,clientaddr=192.168.56.100,local_lock=none,addr=192.168.56.200) - root has full read / write access
- other users can read / write files if sufficient privileges are set up
nfsidmap is active, listing files on the client properly translates usernames / groups chown from client is possible, and properly translates usernames / groups Files are created under the uid/gid of the client, which means they are created with the wrong uid / gid on the server It gets mapped to the wrong owner if the server happens to have a user with the same uid, otherwise the owner is nobody:4294967294 The effective user seems to be user specified by the clients uid. I suppose this is a known drawback when using sec=sys if the share is exported with sec=krb5 server exportfs -v output /srv/export <world>(rw,async,wdelay,no_root_squash,no_subtree_check,sec=krb5p:krb5,rw,secure,no_root_squash,no_all_squash) client mount output arhiv.pecar:/srv/export on /mnt type nfs4 (rw,relatime,vers=4.2,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.56.100,local_lock=none,addr=192.168.56.200) - all users have read access, no user (including root) has write access on files / folders owned by them
- creating files in
o+w folders will create them under the anonymous user (nobody:nogroup or anonuid:anongid if specified in exports entry) nfsidmap is active, listing files on the client properly translates usernames / groups chown from client fails with Operation not permitted. The effective user seems to be the anonymous user. I'm at a loss on what could be wrong here, so I'd appreciate the communities insight. I can provide the relevant configuration files (/etc/hosts , /etc/krb5.conf , /etc/idmapd.conf , /etc/default/nfs-common , service, kernel module list) upon request. |
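One hedged observation: in the two transcripts the client's negotiated sec= is the opposite of what the export offers, so it may be worth pinning the security flavour at mount time and making sure the writing user actually holds a Kerberos ticket; without a ticket, requests are squashed to the anonymous id. Sketch (REALM is a placeholder):
    # client
    sudo mount -t nfs4 -o sec=krb5 arhiv.pecar:/srv/export /mnt
    kinit user@REALM
    klist
    touch /mnt/testfile && ls -l /mnt/testfile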
Trouble connecting UART device via USB Port Posted: 23 Jun 2021 09:27 AM PDT I'm trying to communicate with a couple of UART devices via USB: an HT-06 bluetooth module and a GY-NEO6MV2 GPS module. I am using a Prolific PL2303 USB cable. As a backup I also have a Silicon Labs CP2102. When I connect the PL2303 - a
lsusb command returns Bus 001 Device 015: ID 067b:2303 Prolific Technology, Inc. PL2303 Serial Port - and a
dmesg command returns [147697.657037] usb 1-11: pl2303 converter now attached to ttyUSB0 - a
ls -l of /dev shows crw-rw---- 1 root dialout 188, 0 Jun 15 08:58 ttyUSB0 and I've added myself to the dialout group as well as setting chmod to 666 . I then use Putty with a serial connection with Port /dev/ttyUSB0 , Baud 9600 and Parity 8,1,None. I connect the PL2303 cable to the HT-06 as GND-GND, VCC-VCC, TX-RX and RX-TX. All pretty basic stuff. The Putty screen starts with a cursor in the top left corner. I send an AT command. I'm expecting OK but nothing happens. I have a second HT-06, but still nothing. I thought it might be a broken RX or TX cable (I get a flashing LED on the HT-06 so VCC and GND are OK) so I swapped out the PL2303 for the CP2102. Both lsusb and dmesg tell me the converter is connected (again at /dev/ttyUSB0 ). Using the same Putty settings I still get nothing. Along similar lines I've connected the NEO6M with both the PL2303 and the CP2102, and use xgps (a subset of gpsd ). This returns an error gpsd is not connected to /dev/ttyUSB0 and obviously nothing happens. I'm using Linux Mint 20 with kernel 5.4.0-74-generic which has the drivers for both CP210X and PL230X. I've also tried different USB ports (USB2 and USB3). Despite 2 different USB-TTL converters, 3 UART devices and several different serial terminal apps (I've also tried minicom and rfcomm ), nothing works. |
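A loopback test isolates the adapter and driver from the modules: jumper the adapter's own TX pin to its RX pin, open the port, and anything typed should be echoed back. Sketch at the 9600 8N1 settings from the question:
    screen /dev/ttyUSB0 9600
    # type a few characters; with TX looped to RX they should appear on screen
    # exit screen with Ctrl-A then k
If the loopback echoes, the adapter and driver are fine and the problem is on the module side (wiring, 3.3 V vs 5 V logic levels, or the HT-06 expecting AT commands without CR/LF).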
What's the difference between `chmod a+x` and `chmod +x`? [duplicate] Posted: 23 Jun 2021 09:45 AM PDT Found an article saying to use chmod a+x to add execute permission to a file. What is the difference between it and chmod +x ? (And is there an easy way to search for these differences in the man pages? "a" is too small to meaningfully search the man pages, and reading them thoroughly would take too long). |
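A small demonstration of the difference, which is the umask: without a "who" letter, + does not add bits that are set in the umask, while a+ ignores the umask. The umask value below is only chosen to make the effect visible:
    umask 0113          # masks the execute bits for user, group, and other
    touch f
    chmod +x f;  ls -l f   # no execute bits added - they are all masked by the umask
    chmod a+x f; ls -l f   # execute bits added for user, group, and other; umask ignored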
Use ffmpeg to split a file output by size Posted: 23 Jun 2021 09:51 AM PDT I can split an audio (or video) file by time, but how do I split it by file size? ffmpeg -i input.mp3 -ss S -to E -c copy output1.mp3 -ss S -to E -c copy output2.mp3 Which is fine if I have time codes, but if I want the output files to be split at 256MB regardless of the time length, what do I do? (What I am doing now is estimating, but that often means I have to make multiple runs at it with -ss S -to E to get files that are close to where I want in size). |
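ffmpeg can stop an output at a byte limit with -fs , so one hedged approach is to cut the first chunk by size and start the next chunk where the previous one ended (268435456 bytes = 256 MiB; file names are examples):
    ffmpeg -i input.mp3 -c copy -fs 268435456 out1.mp3
    ffmpeg -ss "$(ffprobe -v error -show_entries format=duration -of csv=p=0 out1.mp3)" \
           -i input.mp3 -c copy -fs 268435456 out2.mp3
Repeat the second command (accumulating the durations of the chunks already written) until the end of the input is reached; the cut points land on frame boundaries, so sizes come out close to, not exactly at, the limit.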
tail: error writing 'standard output': Broken pipe Posted: 23 Jun 2021 09:02 AM PDT I tried to use some scripts which use tail commands on Debian stretch but I got tail: error writing 'standard output': Broken pipe . Does Debian handle tail and pipe syntax differently? Thank you in advance, |
getting Checkpoint VPN SSL Network Extender working in the command line Posted: 23 Jun 2021 09:38 AM PDT The official snx command line tool from CheckPoint, for setting up an SSL Network Extender VPN, is no longer working from the Linux command line. It is also no longer actively supported by CheckPoint. However, there is a promising project that tries to replicate the Java applet for authentication and talks with the snx command line utility, called snxconnect . I was trying to get the snxconnect text utility to work in Debian Buster, doing: sudo pip install snxvpn and export PYTHONHTTPSVERIFY=0 snxconnect -H checkpoint.hostname -U USER However, it was mostly dying either with an HTTP error of: HTTP/1.1 301 Moved Permanently: or: Got HTTP response: HTTP/1.1 302 Found or: Unexpected response, try again. What can I do about it? PS. The official Endpoint Security VPN client works well on both a Mac running High Sierra and Windows 10 Pro. |
dh_install not finding files that clearly exist Posted: 23 Jun 2021 09:01 AM PDT Running debuild -us -uc to build a package I'm working on, dh_install complains about missing files. Running it on it's own, it prints the same error messages: $ dh_install /home/felix/work/my_app/debian/install: 1: /home/felix/work/my_app/debian/install: execute.py: not found /home/felix/work/my_app/debian/install: 2: /home/felix/work/my_app/debian/install: module1: not found Though I'm in the correct directory, and the files are clearly there: $ pwd /home/felix/work/my_app $ ll total 56K [...] -rwxrwxr-x 1 felix felix 20K Dez 6 10:35 execute.py [...] drwxrwxr-x 4 felix felix 4,0K Dez 1 19:10 module1 [...] And here's my debian/install : execute.py usr/lib/my-cool-app module1 usr/lib/my-cool-app What am I doing wrong? This worked a day ago, and I changed nothing in this directory since then: $ git status On branch debian_package nothing to commit, working directory clean Additional info: $ dpkg -s debhelper | grep Version Version: 9.20131227ubuntu1 $ cat debian/compat 9 |
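The error text shows debian/install itself being run as a shell script (each line treated as a command), which typically happens when the file has gained the execute bit; a hedged sketch of the check and fix:
    ls -l debian/install        # an execute bit here makes debhelper try to run the file
    chmod a-x debian/install
    debuild -us -uc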
How to export and migrate NetworkManager settings to new system? Posted: 23 Jun 2021 08:23 AM PDT How to export and migrate NetworkManager settings to new system? Use cases are: - reinstalling a machine
- moving network configuration from laptop to desktop system (or vice-versa)
All settings should be migrated, that includes: - default and custom network connections
- wifi connections with passwords
- VLAN configurations
- VPN configurations (with keys if possible)
I checked on Arch wiki and it there is nothing on migration, so I'm asking you guys and gals here. |
Usb serial cable (prolific) not working with ftdi_sio driver on rpi-buildroot image Posted: 23 Jun 2021 09:21 AM PDT I'm using a raspberry pi to control a LED matrix display with a Prolific USB to serial cable (067b 2303). Using the default raspian image it works perfectly, however with a custom buildroot image (using rpi-buildroot) I'm unable to configure or use it. Here are my steps thus far: Linux recognizes the device but does not automatically load any drivers or attach it to /dev/ttyUSBx, so I use: modprobe ftdi_sio Which yields: usbcore: registered new interface driver usbserial usbcore: registered new interface driver usbserial_generic usbserial: USB Serial support registered for generic usbcore: registered new interface driver ftdi_sio usbserial: USB Serial support registered for FTDI USB Serial Device I don't see anything at /dev/ttyUSB* so I echo to new_id with prod and vendor IDs: echo 067b 2303 > /sys/bus/usb-serial/drivers/ftdi_sio/new_id Which yields: ftdi_sio 1-1.2:1.0: FTDI USB Serial Device converter detected usb 1-1.2: Detected FIBU232AM ftdi_sio ttyUSBO: Unable to read latency timer: -32 ftdi_sio ttyUSBO: Unable to write latency timer: -32 usb 1-1.2: FTDI USB Serial Device converter now attached to ttyUSB0 When I try to change baud rate with: stty -F /dev/ttyUSB0 115200 Which fails with: ftdi_sio ttyUSBO: ftdi_set_termios FAILED to set databits/stopbits/parity ftdi_sio ttyUSBO: ftdi_set_termios urb failed to set baudrate ftdi_sio ttyUSBO: urb failed to clear flow control ftdi_sio ttyUSBO: failed to get modem status: -32 ftdi_sio ttyUSBO: ftdi_set_termios urb failed to set baudrate ftdi_sio ttyUSBO: urb failed to clear flow control ftdi_sio ttyUSBO: failed to get modem status: -32 ftdi_sio ttyUSBO: error from flowcontrol urb I found nothing for usb serial to configure. |
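Since the adapter is a Prolific chip, the pl2303 driver (the one Raspbian loads, per the dmesg line in the question) is probably what the buildroot kernel is missing; ftdi_sio is for FTDI chips, and forcing the Prolific ID into it would explain the latency-timer and termios errors. A hedged sketch:
    # in the buildroot Linux kernel configuration, enable:
    #   CONFIG_USB_SERIAL=y (or m)
    #   CONFIG_USB_SERIAL_PL2303=y (or m)
    # then on the target:
    modprobe pl2303          # only needed if built as a module
    dmesg | grep -i pl2303   # should report a converter attached to ttyUSB0
    stty -F /dev/ttyUSB0 115200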
How to use ssh over http or https? Posted: 23 Jun 2021 09:33 AM PDT I have a linux fedora21 client laptop behind a corporate firewall (which lets through http and https ports but not ssh 22) and I have a linux fedora21 server at home behind my own router. Browsing with https works when I specify my home server's public IP address (because I configured my home router). Is it possible to ssh (remote shell) to my home server over the http/s port? I saw a tool called corkscrew . Would that help? opensshd and httpd run on the home server. What else would need configuration? |
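A hedged sketch of the usual setup: make sshd reachable on 443 at home (either directly or via sslh sharing the port with httpd), and tunnel through the corporate proxy with corkscrew from the laptop. Proxy host/port and the Host alias below are placeholders:
    # home server: /etc/ssh/sshd_config
    Port 443
    # laptop: ~/.ssh/config
    Host home
        HostName <home-public-ip>
        Port 443
        ProxyCommand corkscrew proxy.corp.example 3128 %h %p
With that in place, "ssh home" goes out over the proxy's CONNECT method on the https port.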
Removing a specific line from a file Posted: 23 Jun 2021 09:22 AM PDT Somewhere in the middle of my CSV file is this line: Products below this line are out of stockNumber, month, year, reference, store Note: Number, month, year, reference and store are the CSV fields. How do I delete this line from the file using a command line command? Note the CSV is like this: Number, month, year, reference, store 1,1,2014,13322,main 2,2,2014,13322,main 3,3,2011,1322,main 4,4,2012,3322,main 5,4,2013,122,secondary Products below this line are out of stockNumber, month, year, reference, store 12,411,2010,122,Albany 25,41,2009,122,Dallas 35,24,2008,122,New |
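A hedged one-liner matching that exact line (GNU sed shown; drop -i to preview before editing in place):
    sed -i '/^Products below this line are out of stock/d' file.csv
    # or keep the original untouched and write a cleaned copy:
    grep -v '^Products below this line are out of stock' file.csv > cleaned.csv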
Add thousands separator in a number Posted: 23 Jun 2021 09:01 AM PDT In python re.sub(r"(?<=.)(?=(?:...)+$)", ",", stroke ) To split a number by triplets, e.g.: echo 123456789 | python -c 'import sys;import re; print re.sub(r"(?<=.)(?=(?:...)+$)", ",", sys.stdin.read());' 123,456,789 How to do the same with bash/awk? |
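Two hedged equivalents: printf's apostrophe flag (needs a locale that defines a thousands separator) and a GNU sed loop that inserts commas from the right:
    LC_NUMERIC=en_US.UTF-8 printf "%'d\n" 123456789
    echo 123456789 | sed ':a;s/\B[0-9]\{3\}\>/,&/;ta'
Both print 123,456,789.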