Shell Script to Toggle VPN On and Off
Posted: 28 Oct 2021 10:54 AM PDT

I am currently toggling my hotspotshield on Ubuntu 20.04.3 using the terminal: hotspotshield connect US to connect (to a US server) and hotspotshield disconnect to disconnect. I would like to map this functionality to a single key in order to toggle the VPN on and off.

I would like to use hotspotshield status, which returns

```
Client is running : no
VPN connection state : disconnected
```

if the client isn't running, to decide whether to run the connect or the disconnect command. I planned on doing this by capturing the output of hotspotshield status as a string and searching it for "no", as that string does not appear in the output when the client is running. However, I am having trouble interpreting the output. Here is my script so far (note that I have never tried anything like this in bash):

```shell
#!/bin/bash
status=$(hotspotshield status)
if [[status =~ "no"]]; then
hotspotshield connect US
else; then
hotspotshield disconnect
```

Any pointers would be appreciated!
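For reference, a minimal corrected sketch of the script above, same logic, with the syntax fixed: `[[` needs a `$` on the variable and spaces around the brackets, `else` takes no `then`, and the `if` needs a closing `fi`. It assumes, per the sample output in the question, that "no" appears in the status text only while disconnected.

```shell
#!/bin/bash
# toggle_vpn: connect when `hotspotshield status` reports "no"
# (client not running / disconnected), otherwise disconnect.
toggle_vpn() {
    local status
    status=$(hotspotshield status)
    if [[ $status == *no* ]]; then   # note the $ and the spaces inside [[ ]]
        hotspotshield connect US
    else
        hotspotshield disconnect
    fi
}
```

The last line of the real script would simply call `toggle_vpn`; the key binding then runs `bash /path/to/toggle-vpn.sh`.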
Incremental network file system that would work like "Post-copy memory migration"
Posted: 28 Oct 2021 10:34 AM PDT

When booting an embedded system with an initramfs, I have the option to use the network to populate the rest of the root filesystem. I wonder if any current filesystem (or FUSE) implementation exists that would support the case where I want to continue booting as soon as possible, improving on the pre-copy solution I currently have. Post-copy memory migration works by copying as few pages as possible up front, bringing in only the required pages (by trapping page faults), and transferring the remainder of the pages in the background until every page has been transferred. I think it would be similar to a stackable file system that works like a cache, but for similar reasons I would like it to be file based and not block based.
sed: -e expression #1, char 73: unterminated `s' command
Posted: 28 Oct 2021 10:52 AM PDT

I'm very new to sh and development and I'm blundering my way through the code one error at a time. I've come across an error now that I cannot get to the bottom of. Any help would be great! I am running an sh script in GitHub Actions and am receiving this error:

```
sed: -e expression #1, char 73: unterminated `s' command
```

Here's my code:

```shell
SPECTESTSFILE='manifest/specifictests.xml'
cat $SPECTESTSFILE
BUILDXML_TEMPLATE_FILE='buildFiles/buildrunspecifictests.template.xml'
SPECTESTS="$(cat $SPECTESTSFILE)"
echo "Specified Tests: "
echo $SPECTESTS
sed -i "s|<runTest><\/runTest>|$SPECTESTS|g" $BUILDXML_TEMPLATE_FILE
```

In my manifest/specifictests.xml file, I have a list of RunTests on separate lines:

```
<runTest>...</runTest>
<runTest>....</runTest>
```

and I want to insert this list into the buildFiles/buildrunspecifictests.template.xml file:

```
<deploy>
    <runTest></runTest>
</deploy>
```

Eventually, I want the file to look like this:

```
<deploy>
    <runTest>...</runTest>
    <runTest>....</runTest>
</deploy>
```

What I've found is that it works if there are no newlines in the manifest/specifictests.xml file, but that doesn't allow much freedom for the users, or for eventually automating the creation of this file elsewhere. Does anyone know a fix?
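The error arises because sed's s/// replacement cannot contain raw newlines. One workaround (a sketch, not the only possible fix) is to do the substitution in perl instead: -0777 slurps the whole file so newlines are ordinary characters, and passing the replacement through the environment avoids interpolating XML into the command line.

```shell
# insert_tests SPECFILE TEMPLATE
# Splice the (multi-line) contents of SPECFILE over the empty
# <runTest></runTest> placeholder in TEMPLATE, editing TEMPLATE in place.
insert_tests() {
    SPECTESTS=$(cat "$1") \
        perl -0777 -i -pe 's|<runTest></runTest>|$ENV{SPECTESTS}|g' "$2"
}
```

Usage with the files from the question: `insert_tests manifest/specifictests.xml buildFiles/buildrunspecifictests.template.xml`.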
ssh command -N option explanations [duplicate]
Posted: 28 Oct 2021 10:07 AM PDT

In the following command:

```shell
ssh -L 2424:localhost:5212 myuser@mywebsite.com -N -p 8945
```

what is the meaning/role of -N?
Zentyal upgrade hangs on setting up zentyal-network (6.1.1)
Posted: 28 Oct 2021 09:54 AM PDT

I have an older Zentyal installation that hangs on an update. After running apt upgrade, it tries to configure zentyal-network, but this hangs indefinitely. After aborting and restarting, a new upgrade is not possible, so I'm running dpkg --configure -a, but this command never finishes either. The dpkg log says nothing but "status half-configured"... no error or anything else.
How to completely remove docker from Oracle Linux 7
Posted: 28 Oct 2021 10:37 AM PDT

I installed docker on Oracle Linux 7 with sudo yum install docker and thought I removed it with sudo yum remove docker, but /usr/bin/docker is still there:

```
$ docker --version
Docker version 19.03.11-ol, build 9bb540d
```

There is a similar question for CentOS, but trying the answers there did not remove anything, see below.

```
$ sudo yum remove docker \
>                 docker-client \
>                 docker-client-latest \
>                 docker-common \
>                 docker-latest \
>                 docker-latest-logrotate \
>                 docker-logrotate \
>                 docker-engine
Loaded plugins: langpacks, ulninfo
No Match for argument: docker
No Match for argument: docker-client
No Match for argument: docker-client-latest
No Match for argument: docker-common
No Match for argument: docker-latest
No Match for argument: docker-latest-logrotate
No Match for argument: docker-logrotate
No Match for argument: docker-engine
No Packages marked for removal
```

And even the documented solution:

```
$ sudo yum remove docker-ce docker-ce-cli containerd.io
Loaded plugins: langpacks, ulninfo
No Match for argument: docker-ce
No Match for argument: docker-ce-cli
No Match for argument: containerd.io
No Packages marked for removal
```

How do I purge it from my system completely? Thank you!
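One hedged approach when the package name is unknown: rather than guessing names from other distributions, ask rpm which installed package actually owns the leftover binary, then remove exactly that. (On Oracle Linux the Docker package may be named differently from upstream's docker-ce; the binary shown in the question reports an -ol build.)

```shell
# owning_pkg FILE: print the name of the installed package that owns FILE.
owning_pkg() {
    rpm -qf --queryformat '%{NAME}\n' "$1"
}

# Then remove whatever actually owns the binary, e.g.:
#   sudo yum remove "$(owning_pkg /usr/bin/docker)"
```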
How to keep the bash script alive in python script
Posted: 28 Oct 2021 09:51 AM PDT

I have a bash script that calls ss -i in a loop (it prints ss -i continuously). I have a python script running mininet with two hosts, h1 and h2, and I need ss -i for h1 only. So basically, in the python script, I need to call h1.cmd('bash.sh'). But the session always ends quickly. Here is some of the code:

```python
info("*** Starting network\n")
net.build()
c1.start()
s1.start( [c1] )
s2.start( [c1] )
s3.start( [c1] )
s4.start( [c1] )

h1.cmd('bash.sh &')
h2.cmd('iperf -s -i 0.1 > iperfServer.txt &')
h1.cmd('iperf -c 10.0.0.2 -i 0.1 -t 20 > iperfClient.txt & ')
time.sleep(21)

info("*** Running CLI\n")
CLI( net )

info("*** Stopping network\n")
net.stop()
```
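A sketch of the looping script, assuming it is essentially a `ss -i` loop as described. The file names here are placeholders: the key points are that the background process needs its output redirected somewhere (otherwise it dies with the short-lived h1.cmd() session), and that it should be invoked through bash explicitly, e.g. `h1.cmd('bash bash.sh > ss_h1.txt 2>&1 &')`.

```shell
# loop_ss: print `ss -i` (per-socket TCP info) every 100 ms until killed.
loop_ss() {
    while true; do
        ss -i
        sleep 0.1
    done
}
```

The last line of the actual script would simply call `loop_ss`; remember to kill it (e.g. with `h1.cmd('kill %bash')` or by PID) before `net.stop()`.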
connect: Network unreachable - Ubuntu 18.04
Posted: 28 Oct 2021 09:35 AM PDT

Sorry for the possibly dumb question, but I just can't find a solution specific to my issue. I'm a linux newbie trying to manage a VPS server and I'm in trouble. Could someone PLEASE help me? I'm running some docker containers but I'm not able to use the internet anywhere. I'm also using netplan instead of ifupdown.

ifconfig results:

```
br-0c7683224a70: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.18.0.1  netmask 255.255.0.0  broadcast 172.18.255.255
        inet6 fe80::42:60ff:fee3:3564  prefixlen 64  scopeid 0x20<link>
        ether 02:42:60:e3:35:64  txqueuelen 0  (Ethernet)
        RX packets 34  bytes 2016 (2.0 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 51  bytes 4534 (4.5 KB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        inet6 fe80::42:e2ff:fe3a:6cd2  prefixlen 64  scopeid 0x20<link>
        ether 02:42:e2:3a:6c:d2  txqueuelen 0  (Ethernet)
        RX packets 20  bytes 1600 (1.6 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 35  bytes 3384 (3.3 KB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

enp6s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet6 fe80::5800:2ff:fe9a:fdb3  prefixlen 64  scopeid 0x20<link>
        ether 5a:00:02:9a:fd:b3  txqueuelen 1000  (Ethernet)
        RX packets 4  bytes 520 (520.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 37  bytes 7082 (7.0 KB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 15154  bytes 1578872 (1.5 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 15154  bytes 1578872 (1.5 MB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

veth14db567: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::d494:1bff:feaa:943b  prefixlen 64  scopeid 0x20<link>
        ether d6:94:1b:aa:94:3b  txqueuelen 0  (Ethernet)
        RX packets 10  bytes 940 (940.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 26  bytes 2355 (2.3 KB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

veth375de13: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::b853:27ff:fe5c:413  prefixlen 64  scopeid 0x20<link>
        ether ba:53:27:5c:04:13  txqueuelen 0  (Ethernet)
        RX packets 25  bytes 15478 (15.4 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 52  bytes 4831 (4.8 KB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

vethc49e94d: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::e8ad:b6ff:fead:10f9  prefixlen 64  scopeid 0x20<link>
        ether ea:ad:b6:ad:10:f9  txqueuelen 0  (Ethernet)
        RX packets 49  bytes 4597 (4.5 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 56  bytes 18118 (18.1 KB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

vethf0eef78: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::f020:51ff:fe8a:7e2c  prefixlen 64  scopeid 0x20<link>
        ether f2:20:51:8a:7e:2c  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 18  bytes 1300 (1.3 KB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
```

Results of route:

```
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
172.18.0.0      0.0.0.0         255.255.0.0     U     0      0        0 br-0c7683224a70
```

I have literally NO idea what is happening 😞
ip rule not respected for generated packets, how to fix?
Posted: 28 Oct 2021 09:04 AM PDT

Problem: ip rules built to route L4 traffic out a specific interface are not respected when packets are generated with a different source address.

Overview: I want to generate packets with a different source address than the host's addresses. To accomplish this I am using python's Scapy package. Note: my goal is to send DNS traffic, however I was not able to find a simple solution that let me spoof the source address in DNS requests, so I am just generating a UDP packet with source and destination port 53. I believe this still works for now, as I am only testing L3 and L4, not the actual DNS protocol. Here is my script:

```python
#!/usr/bin/python3
# The following is designed to generate a packet with a different source address
import sys
from scapy.all import *

def main():
    S = "10.0.26.122"  # spoofed source IP address
    D = "10.0.26.123"  # destination IP address
    SP = 53            # source port
    DP = 53            # destination port
    payload = "This is a fake message"  # packet payload
    spoofed_packet = IP(src=S, dst=D) / UDP(sport=53, dport=53) / payload
    send(spoofed_packet)

# Entry point
main()
```

Before running the script, here is what my route table looks like:

```
# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.104.8.1      0.0.0.0         UG    101    0        0 ens192
10.0.21.0       0.0.0.0         255.255.255.0   U     104    0        0 ens256
10.0.26.0       0.0.0.0         255.255.255.0   U     0      0        0 ens224
10.0.27.0       0.0.0.0         255.255.255.0   U     102    0        0 ens193
10.0.28.0       10.0.29.1       255.255.255.0   UG    100    0        0 ens161
10.0.29.0       0.0.0.0         255.255.255.0   U     100    0        0 ens161
10.104.8.0      0.0.0.0         255.255.255.0   U     101    0        0 ens192
10.212.134.0    10.104.8.1      255.255.255.0   UG    101    0        0 ens192
192.168.122.0   0.0.0.0         255.255.255.0   U     0      0        0 virbr0
```

Here are the ip interfaces:

```
# ip -br a
lo           UNKNOWN  127.0.0.1/8
ens161       UP       10.0.29.122/24
ens192       UP       10.104.8.122/24
ens193       UP       10.0.27.122/24
ens224       UP       10.0.26.122/24
ens256       UP       10.0.21.122/24
virbr0       DOWN     192.168.122.1/24
virbr0-nic   DOWN
ip_vti0@NONE DOWN
```

When I run the script with ./packet-gen.py "10.0.26.122" "10.0.26.123" it works. This is because I have not yet built my ip rule / separate routing table. I perform a tcpdump on the host (10.0.26.122) and on the far-end host (10.0.26.123), and I see the UDP packet being sent. I also tested with dig www.google.com @10.0.26.123 and see an actual DNS request being performed and get a response.

Now the problem. I want to remove the route entry in the main table, then route based only on the port number. To do this, I first remove the route entry for 10.0.26.0/24:

```
# ip route del 10.0.26.0/24 dev ens224
# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.104.8.1      0.0.0.0         UG    101    0        0 ens192
10.0.21.0       0.0.0.0         255.255.255.0   U     104    0        0 ens256
10.0.27.0       0.0.0.0         255.255.255.0   U     102    0        0 ens193
10.0.28.0       10.0.29.1       255.255.255.0   UG    100    0        0 ens161
10.0.29.0       0.0.0.0         255.255.255.0   U     100    0        0 ens161
10.104.8.0      0.0.0.0         255.255.255.0   U     101    0        0 ens192
10.212.134.0    10.104.8.1      255.255.255.0   UG    101    0        0 ens192
192.168.122.0   0.0.0.0         255.255.255.0   U     0      0        0 virbr0
```

The entry is removed. If I run my script again, it does not work; the dig request also fails. This is expected, as there is no L3 route in the main kernel routing table. To route on L4, I first created a new route table to send all traffic via ens224:

```
# ip route add table 53 0.0.0.0/0 dev ens224
```

Then I create an ip rule to capture any traffic using port 53 and send it out via my custom table 53:

```
# ip rule add ipproto udp dport 53 lookup 53
```

I also created a special sysctl rule for rp_filter to loosen strict reverse path forwarding:

```
# sysctl -w "net.ipv4.conf.ens224.rp_filter=2"
```

To check my work I see the following:

```
# ip route list table 53
default dev ens224 scope link
# ip rule list
0:      from all lookup local
32765:  from all ipproto udp dport 53 lookup 53
32766:  from all lookup main
32767:  from all lookup default
```

To test this, I first try to ping 10.0.26.123. It fails, which is expected. Now I try to perform a dig request, dig www.google.com @10.0.26.123, and it works. The dig request hits the ip rule before going to the main table and is routed appropriately. I see the traffic reach the server (10.0.26.123) with tcpdump, coming from my host (10.0.26.122).

Now I try running my scapy script again, and nothing. Even with the same source address as the host, nothing in tcpdump on my host or the server. I tried changing the source address: no change, nothing. If I add back the L3 route for 10.0.26.0/24 in the main table, the scapy script works again. What am I missing here? Why won't my generated traffic respect the ip rule sets I created?
Detect all the OS packages that are needed to be installed in order to use the path
Posted: 28 Oct 2021 08:59 AM PDT

Given a path, I want to detect all the OS packages that need to be installed in order to use it. For example:

```
> /bin/rpm -qf --queryformat "[%{NAME}]\n" /usr/bin/tcsh
tcsh
```

Sometimes it does not work. For example:

```
> /bin/rpm -qf --queryformat "[%{NAME}]\n" /sadd/python/lib/python3.7/lib-dynload/_sqlite3.cpython-37m-x86_64-linux-gnu.so
file /sadd/python/lib/python3.7/lib-dynload/_sqlite3.cpython-37m-x86_64-linux-gnu.so is not owned by any package
```

But looking into the output of ldd I see:

```
> ldd /sadd/python/lib/python3.7/lib-dynload/_sqlite3.cpython-37m-x86_64-linux-gnu.so
        linux-vdso.so.1 (0x00007f11f7ffa000)
        libsqlite3.so.0 => /usr/lib64/libsqlite3.so.0 (0x0000711fff7901000)
        libpython3.7m.so.1.0 => /usr/pkgs/python3/3.7.4/lib/libpython3.7m.so.1.0 (0x00007ffff7422000)
        libpthread.so.0 => /lib64/noelision/libpthread.so.0 (0x00007f11ff7205000)
        libc.so.6 => /lib64/libc.so.6 (0x00007f13ff6e60000)
        libdl.so.2 => /lib64/libdl.so.2 (0x00007ff126c5c000)
        libcrypt.so.1 => /lib64/libcrypt.so.1 (0x00007ff216a21000)
        libutil.so.1 => /lib64/libutil.so.1 (0x0000711ff681e000)
        libm.so.6 => /lib64/libm.so.6 (0x00007fff16521000)
        /lib64/ld-linux-x86-64.so.2 (0x00007fff17ddb000)
```

I see that it has /usr/lib64/libsqlite3.so.0. So I can do:

```
> /bin/rpm -qf --queryformat "[%{NAME}]\n" /usr/lib64/libsqlite3.so.0
libsqlite3-0
```

Meaning there is a required OS package that should be installed in order to use /sadd/python/lib/python3.7/lib-dynload/_sqlite3.cpython-37m-x86_64-linux-gnu.so.

Now, I can create a script which runs the above rpm command and then the ldd command on each path and iterates over the shared libs (might need to use locate, because sometimes there is no path, like for linux-vdso.so.1). But:

- It is not recommended to use ldd.
- The parsing of the ldd output is pretty ugly.

Now, I saw a related topic on the matter. I could use readelf, but same issue. Is there a better solution to detect all the required OS packages for a given path? I am also using rpmdep.pl, but it expects to get a package name and returns all the package dependencies. So for now my algorithm is:

1. Run /bin/rpm -qf --queryformat "[%{NAME}]\n" $path and get the package name (marked with $package). Also add it to the packages list.
2. Run rpmdep.pl $package and add all the packages to the list.
3. Run ldd $path and for each line:
   - if there is a path (like libsqlite3.so.0 => /usr/lib64/libsqlite3.so.0), go back to step 1 with /usr/lib64/libsqlite3.so.0.
   - if there is no path (like linux-vdso.so.1), try to locate it (using the locate command) and, if found, go back to step 1 with the path you got.

That way I collect all of the OS packages that are required for a path. It works pretty well, but I'm looking for a better/cleaner approach to solving this task.
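A compact sketch of steps 1 and 3 of the algorithm above, one level deep, using only rpm and ldd (command names per the question; error handling is minimal and libraries without an installed owner are simply skipped):

```shell
# owning_packages FILE: print the package owning FILE, then the packages
# owning each shared library FILE resolves against. ldd lines without a
# resolved "=> /path" part (e.g. linux-vdso.so.1) are skipped here.
owning_packages() {
    rpm -qf --queryformat '%{NAME}\n' "$1" 2>/dev/null
    ldd "$1" 2>/dev/null |
        awk '$2 == "=>" && $3 ~ /^\// { print $3 }' |
        xargs -r rpm -qf --queryformat '%{NAME}\n' 2>/dev/null || true
}
# Pipe through `sort -u` for a deduplicated list; the full recursion
# (step 3 back to step 1) would re-run owning_packages on each library path.
```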
Systemd service failed at exec spawning: permission denied
Posted: 28 Oct 2021 08:47 AM PDT

I am trying to set up a systemd service that executes a script to start a server on ubuntu, but I always get an error that the service can't execute the script due to permissions, even though the user has full permissions on the folder and the script. I saw in many posts that selinux could be a problem, so I tried locating the script in the /usr/local/bin folder. That didn't work, and I also checked with sestatus: selinux is disabled anyway. I also tried this line:

```shell
chmod +x placeholder.sh
```

Here is my service:

```
[Unit]
Description=Satisfactory Server
Wants=network.target
After=syslog.target network-online.target

[Service]
Type=simple
Restart=on-failure
RestartSec=10
User=satisfactory
WorkingDirectory=/home/satisfactory/SatisfactoryDedicatedServer
ExecStart=/home/satisfactory/SatisfactoryDedicatedServer/start_server.sh

[Install]
WantedBy=multi-user.target
```

I tried many different locations and users, but I get the permission denied error everywhere.
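When a unit dies with EACCES at exec time, it is worth checking every directory component on the path, not just the script itself: a single directory the service user cannot traverse (missing the x bit) is enough. A small diagnostic sketch, assuming util-linux's namei is available:

```shell
# check_exec_path FILE: show owner/mode of every path component, plus the
# script's first line; any component without x for the service user, or a
# broken/missing shebang, explains an exec-time "permission denied".
check_exec_path() {
    namei -l "$1" || echo "namei not available"
    head -n 1 "$1"    # expect a valid shebang such as #!/bin/bash
}
# e.g.: check_exec_path /home/satisfactory/SatisfactoryDedicatedServer/start_server.sh
# also worth trying directly as the service user:
#   sudo -u satisfactory /home/satisfactory/SatisfactoryDedicatedServer/start_server.sh
```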
How to store query multiple result in shell script variable (array)?
Posted: 28 Oct 2021 09:33 AM PDT

I'm trying to run a query and store every row of the result in an array element in ksh (maybe bash). I do:

```shell
result=($($PATH_UTI/querysh "
set heading off
set feedback off
SELECT columnA,columnb FROM user.comunication;"))
```

The query returns:

```
row1 = HOUSE CAR
row2 = DOC CAT
```

echo "${result[1]}" gives me HOUSE, but I would like echo "${result[1]}" to give "HOUSE CAR".
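The unquoted `result=($(...))` word-splits on every space, so "HOUSE CAR" becomes two elements. In bash 4+ the usual fix is to split on newlines instead, with mapfile; a sketch (querysh and the query are taken from the question):

```shell
# One array element per output LINE, not per word (bash 4+; ksh would
# need a `while read` loop instead, since it lacks mapfile).
mapfile -t result < <("$PATH_UTI/querysh" "
set heading off
set feedback off
SELECT columnA,columnb FROM user.comunication;")

echo "${result[0]}"   # whole first row, e.g. "HOUSE CAR"
```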
Connect to wi-fi AP without getting deauthenticated
Posted: 28 Oct 2021 08:49 AM PDT

I'm using NetworkManager to set everything up automatically, and I use wpa_supplicant's wps_pbc to connect to my AP. How can I keep this method without being deauthenticated every 2 minutes by either NM or wpa_supplicant? If there's any other method to connect with a push button, I'll take it.
multiple schedules for a single task in a k8s cronjob
Posted: 28 Oct 2021 09:00 AM PDT

Warning: k8s greenhorn on this side. I need to run a task that will be set up in a k8s cronjob, and I need it to run every 45 minutes. Having this in the schedule does not work:

```
0/45 * * * *
```

because it would run at X:00, then X:45, then X+1:00 instead of X+1:30. So I might need to set up multiple schedule rules instead:

```
0,45 0/3 * * *
30 1/3 * * *
15 2/3 * * *
```

I am wondering if it's possible to set up multiple schedules in a single CronJob definition, or if I will have to set up multiple CronJobs so that each CronJob takes care of one line.

https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/cron-job-v1/

Update: I just read that it's possible to have more than a single manifest written in a single yaml file, so it might work with 3 manifests... but knowing if it's possible with a single manifest would be awesome.
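For what it's worth, the CronJob spec has exactly one `schedule` field, so the three rules above need three CronJob objects, but they can live in one YAML file separated by `---`. A sketch using the question's schedules (names and image are placeholders):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: task-offset-0
spec:
  schedule: "0,45 0/3 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: task
            image: registry.example.com/task:latest  # placeholder
---
apiVersion: batch/v1
kind: CronJob
metadata:
  name: task-offset-30
spec:
  schedule: "30 1/3 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: task
            image: registry.example.com/task:latest  # placeholder
---
apiVersion: batch/v1
kind: CronJob
metadata:
  name: task-offset-15
spec:
  schedule: "15 2/3 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: task
            image: registry.example.com/task:latest  # placeholder
```

The jobTemplate has to be repeated in each object (YAML anchors do not cross `---` document boundaries), so tooling like Kustomize or Helm is the usual way to avoid the triplication.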
Excluding directories in find
Posted: 28 Oct 2021 08:44 AM PDT

find is always a complete mystery to me whenever I use it; I just want to exclude everything under /mnt from my search (I am in bash on Ubuntu 20.04 on WSL, so I don't want it to search in the Windows space), but find just blunders into those directories, completely ignoring me. I found syntax on this page: https://stackoverflow.com/questions/4210042/how-to-exclude-a-directory-in-find-command and tried all the variations; all failed.

```shell
sudo find / -name 'git-credential-manager*' -not -path '/mnt/*'
sudo find / -name 'git-credential-manager*' ! -path '/mnt/*'
sudo find / -name 'git-credential-manager*' ! -path '*/mnt/*'
```

In each case, it just blunders into /mnt and throws errors (which is really frustrating, as the syntax above looks clear and the stackoverflow page's syntax seems correct):

```
find: '/mnt/d/$RECYCLE.BIN/New folder': Permission denied
find: '/mnt/d/$RECYCLE.BIN/S-1-5-18': Permission denied
```

Can someone show me how to stop find from ignoring my directory exclusion switches?
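The usual trick is -prune, which stops find from ever descending into the excluded tree, whereas `-not -path` only filters results after find has already entered /mnt (hence the permission errors). A generic sketch:

```shell
# find_excluding ROOT SKIPDIR PATTERN: search ROOT for names matching
# PATTERN while never descending into SKIPDIR. -prune cuts the traversal
# at SKIPDIR; the -o -print branch emits matches from everywhere else.
find_excluding() {
    find "$1" -path "$2" -prune -o -name "$3" -print
}

# e.g.: sudo find_excluding / /mnt 'git-credential-manager*'
```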
cannot resize disk on aws instance
Posted: 28 Oct 2021 08:38 AM PDT

I tried following the instructions in: Can't resize a partition using resize2fs. But nothing seemed to work. The output of lsblk is:

```
[AWS root@archive ~]$ lsblk
NAME                          MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
xvda                          202:0    0   300G  0 disk
├─xvda1                       202:1    0   500M  0 part /boot
├─xvda2                       202:2    0  29.5G  0 part
│ ├─vg_archive-lv_root (dm-0) 253:0    0 147.6G  0 lvm  /
│ └─vg_archive-lv_swap (dm-1) 253:1    0     2G  0 lvm  [SWAP]
├─xvda3                       202:3    0    10G  0 part
│ └─vg_archive-lv_root (dm-0) 253:0    0 147.6G  0 lvm  /
└─xvda4                       202:4    0   110G  0 part
  └─vg_archive-lv_root (dm-0) 253:0    0 147.6G  0 lvm  /
```

You can see that 300GB is available, but I've been unable to extend the root volume beyond roughly 150GB. Any help greatly appreciated, thanks.

Update: thought I'd add the linux distro; it's old, which might be part of the problem...

```
Linux version 2.6.32-358.18.1.el6.x86_64 (mockbuild@c6b10.bsys.dev.centos.org) (gcc version 4.4.7 20120313 (Red Hat 4.4.7-3) (GCC) ) #1 SMP Wed Aug 28 17:19:38 UTC 2013
```

As per the (automated?) comment, this is the output of the commands suggested in the link above:

```
[AWS root@archive ~]$ sudo pvs
  PV         VG         Fmt  Attr PSize   PFree
  /dev/xvda2 vg_archive lvm2 a--   29.51g    0
  /dev/xvda3 vg_archive lvm2 a--    9.99g    0
  /dev/xvda4 vg_archive lvm2 a--  110.00g    0
[AWS root@archive ~]$ sudo pvresize /dev/xvda2
  Physical volume "/dev/xvda2" changed
  1 physical volume(s) resized / 0 physical volume(s) not resized
[AWS root@archive ~]$ sudo pvresize /dev/xvda3
  Physical volume "/dev/xvda3" changed
  1 physical volume(s) resized / 0 physical volume(s) not resized
[AWS root@archive ~]$ sudo pvresize /dev/xvda4
  Physical volume "/dev/xvda4" changed
  1 physical volume(s) resized / 0 physical volume(s) not resized
[AWS root@archive ~]$ df -h
Filesystem                      Size  Used Avail Use% Mounted on
/dev/mapper/vg_archive-lv_root  146G  131G  8.0G  95% /
tmpfs                           938M     0  938M   0% /dev/shm
/dev/xvda1                      485M   80M  380M  18% /boot
[AWS root@archive ~]$ sudo lvextend -r -l +100%FREE /dev/mapper/vg_archive-lv_root
  Extending logical volume lv_root to 147.56 GiB
  Logical volume lv_root successfully resized
resize2fs 1.41.12 (17-May-2010)
The filesystem is already 38682624 blocks long.  Nothing to do!
[AWS root@archive ~]$ df -h
Filesystem                      Size  Used Avail Use% Mounted on
/dev/mapper/vg_archive-lv_root  146G  131G  8.0G  95% /
tmpfs                           938M     0  938M   0% /dev/shm
/dev/xvda1                      485M   80M  380M  18% /boot
```

Update: it would appear the fs type is ext4 from the output below:

```
[AWS root@archive ~]$ df -Th
Filesystem                     Type   Size  Used Avail Use% Mounted on
/dev/mapper/vg_archive-lv_root ext4   146G  131G  8.0G  95% /
tmpfs                          tmpfs  938M     0  938M   0% /dev/shm
/dev/xvda1                     ext4   485M   80M  380M  18% /boot
```

Update: output of cfdisk /dev/xvda as requested:

```
cfdisk (util-linux-ng 2.17.2)

Disk Drive: /dev/xvda
Size: 322122547200 bytes, 322.1 GB
Heads: 255   Sectors per Track: 63   Cylinders: 39162

   Name    Flags   Part Type   FS Type      [Label]   Size (MB)
----------------------------------------------------------------
                               Unusable                    1.05 *
   xvda1   Boot    Primary     Linux ext3               524.29 *
   xvda2           Primary     Linux LVM              31686.92 *
   xvda3           Primary     Linux LVM              10731.94 *
   xvda4           Primary     Linux LVM             118115.03
                               Unusable              161063.34 *
```
How to transfer root system volume /dev/vda1 to /dev/sda (LVM)
Posted: 28 Oct 2021 08:36 AM PDT

How do I transfer the root system volume /dev/vda1 to /dev/sda (LVM)?

```
CMD: df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        1.8G     0  1.8G   0% /dev
tmpfs           1.9G   36K  1.9G   1% /dev/shm
tmpfs           1.9G   17M  1.9G   1% /run
tmpfs           1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/vda1        80G   77G  3.4G  96% /
tmpfs           934M  1.7M  933M   1% /tmp
tmpfs           374M     0  374M   0% /run/user/0

CMD: lvmdiskscan
  /dev/sda  [     200.00 GiB] LVM physical volume
  /dev/vda1 [     <80.00 GiB]
  0 disks
  1 partition
  1 LVM physical volume whole disk
  0 LVM physical volumes
```

I'm trying to grow my system. I use cpanel, where I have many websites, and I need more storage. I know it is not possible to merge vda and sda, but it is possible to grow with LVM. I use Centos 8. My 80GB volume is full and I want to add more storage for my system, so I added a 200GB volume in DigitalOcean.
Unable to connect to Jupyter notebook server (Error 404) from Pyenv virtual environment
Posted: 28 Oct 2021 08:35 AM PDT

OS: 5.14.14-arch1-1 GNU/Linux x86_64. Pkgs: jupyter 4.6.3, jupyter-notebook 6.4.4, pyenv 2.2.0.

My setup consists in launching jupyter from a pyenv virtual environment in directory /path/to/my_directory, with all necessary packages and python modules pre-installed, and building a custom iPython kernel for that specific environment. Making the custom iPython kernel available in the notebook session involves:

```shell
$ cd /path/to/my_directory
$ pyenv local 3.7.0
$ python -m pip install -r <requirements_file>
$ ipython kernel install --user --name <my_kernel> --display-name "Python3.7.0 (<my_kernel>)"
$ jupyter notebook
```

... followed by choosing "File > New Notebook" in the Jupyter console menu and picking "my_kernel" in the new notebook browser window. This method has worked flawlessly until today. Yesterday everything worked, launching the custom kernel included. Today is a different day.
Changes were basically a few package upgrades, among them a number of minor updates concerning builds:

```
[2021-10-27T08:42:39+0200] [ALPM] upgraded npm (8.1.0-1 -> 8.1.1-1)
[2021-10-27T21:42:20+0200] [ALPM] upgraded c-ares (1.18.0-1 -> 1.18.1-1)
[2021-10-27T21:42:20+0200] [ALPM] upgraded cmake (3.21.3-1 -> 3.21.4-1)
[2021-10-27T21:42:20+0200] [ALPM] upgraded jupyter-nbconvert (6.1.0-1 -> 6.1.0-2)
[2021-10-27T21:42:20+0200] [ALPM] upgraded jupyter (4.6.3-2 -> 4.6.3-3)
[2021-10-27T21:42:20+0200] [ALPM] upgraded ldb (2:2.4.0-1 -> 2:2.4.1-1)
[2021-10-27T21:42:20+0200] [ALPM] upgraded librsvg (2:2.52.0-1 -> 2:2.52.3-1)
[2021-10-27T21:42:20+0200] [ALPM] upgraded mercurial (5.9.2-1 -> 5.9.3-1)
[2021-10-27T21:42:21+0200] [ALPM] upgraded portmidi (217-9 -> 236-1)
[2021-10-27T21:42:21+0200] [ALPM] upgraded pyenv (2.1.0-1 -> 2.2.0-1)
[2021-10-27T21:42:21+0200] [ALPM] upgraded python-pygame (2.0.1-2 -> 2.0.1-3)
[2021-10-27T21:42:21+0200] [ALPM] upgraded python-sqlalchemy (1.3.23-1 -> 1.4.25-1)
[2021-10-27T21:42:21+0200] [ALPM] upgraded smbclient (4.15.0-1 -> 4.15.1-1)
[2021-10-27T21:42:21+0200] [ALPM] upgraded samba (4.15.0-1 -> 4.15.1-1)
[2021-10-28T08:18:33+0200] [ALPM] upgraded systemd-libs (249.5-2 -> 249.5-3)
[2021-10-28T08:18:33+0200] [ALPM] upgraded device-mapper (2.03.13-1 -> 2.03.14-1)
[2021-10-28T08:18:33+0200] [ALPM] upgraded cryptsetup (2.4.1-1 -> 2.4.1-3)
[2021-10-28T08:18:33+0200] [ALPM] upgraded lvm2 (2.03.13-1 -> 2.03.14-1)
[2021-10-28T08:18:33+0200] [ALPM] upgraded openexr (3.1.2-1 -> 3.1.3-1)
[2021-10-28T08:18:33+0200] [ALPM] upgraded python-joblib (1.0.1-1 -> 1.1.0-1)
[2021-10-28T08:18:33+0200] [ALPM] upgraded python-ptyprocess (0.7.0-1 -> 0.7.0-2)
[2021-10-28T08:18:33+0200] [ALPM] upgraded python-pexpect (4.8.0-3 -> 4.8.0-4)
[2021-10-28T08:18:33+0200] [ALPM] upgraded systemd (249.5-2 -> 249.5-3)
[2021-10-28T08:18:34+0200] [ALPM] upgraded systemd-sysvcompat (249.5-2 -> 249.5-3)
```

The error I get when trying to launch <my_kernel> is:

```
Connection failed

A connection to the notebook server could not be established. The notebook
will continue trying to reconnect. Check your network connection or notebook
server configuration.
```

I document the trace when trying to launch <my_kernel>:

```
[I 14:54:07.039 NotebookApp] Loading IPython parallel extension
[I 14:54:07.040 NotebookApp] Serving notebooks from local directory: /path/to/my_directory
[I 14:54:07.040 NotebookApp] Jupyter Notebook 6.3.0 is running at:
[I 14:54:07.041 NotebookApp] http://localhost:8888/
[I 14:54:07.041 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
ERROR:asyncio:Exception in callback <TaskWakeupMethWrapper object at 0x7fc064ed4468>(<Future finis...igin\r\n\r\n'>)
handle: <Handle <TaskWakeupMethWrapper object at 0x7fc064ed4468>(<Future finis...igin\r\n\r\n'>)>
Traceback (most recent call last):
  File "/home/USER/.pyenv/versions/3.7.0/lib/python3.7/asyncio/events.py", line 88, in _run
    self._context.run(self._callback, *self._args)
RuntimeError: Cannot enter into task <Task pending coro=<HTTP1ServerConnection._server_request_loop() running at /home/USER/.pyenv/versions/3.7.0/lib/python3.7/site-packages/tornado/http1connection.py:823> wait_for=<Future finished result=b'GET /api/co...rigin\r\n\r\n'> cb=[IOLoop.add_future.<locals>.<lambda>() at /home/USER/.pyenv/versions/3.7.0/lib/python3.7/site-packages/tornado/ioloop.py:688]> while another task <Task pending coro=<MappingKernelManager.start_kernel() running at /home/USER/.pyenv/versions/3.7.0/lib/python3.7/site-packages/notebook/services/kernels/kernelmanager.py:176> cb=[IOLoop.add_future.<locals>.<lambda>() at /home/USER/.pyenv/versions/3.7.0/lib/python3.7/site-packages/tornado/ioloop.py:688]> is being executed.
[I 14:54:09.773 NotebookApp] Kernel started: ad69df20-22a6-40b2-9778-5e1aa8781898, name: <my_kernel>
[I 14:54:12.770 NotebookApp] KernelRestarter: restarting kernel (1/5), new random ports
[I 14:54:15.783 NotebookApp] KernelRestarter: restarting kernel (2/5), new random ports
[I 14:54:18.785 NotebookApp] 302 GET /notebooks/01_cpu-only_inference.ipynb (127.0.0.1) 2.010000ms
[I 14:54:18.795 NotebookApp] KernelRestarter: restarting kernel (3/5), new random ports
[I 14:54:21.803 NotebookApp] KernelRestarter: restarting kernel (4/5), new random ports
[W 14:54:24.814 NotebookApp] KernelRestarter: restart failed
[W 14:54:24.815 NotebookApp] Kernel ad69df20-22a6-40b2-9778-5e1aa8781898 died, removing from map.
[W 14:54:30.901 NotebookApp] Replacing stale connection: ad69df20-22a6-40b2-9778-5e1aa8781898:2eb96ae1751841d7801d200795a4bb30
[W 14:55:09.826 NotebookApp] Timeout waiting for kernel_info reply from ad69df20-22a6-40b2-9778-5e1aa8781898
[E 14:55:09.829 NotebookApp] Error opening stream: HTTP 404: Not Found (Kernel does not exist: ad69df20-22a6-40b2-9778-5e1aa8781898)
[W 14:55:09.832 NotebookApp] 404 GET /api/kernels/ad69df20-22a6-40b2-9778-5e1aa8781898/channels?session_id=2eb96ae1751841d7801d200795a4bb30 (127.0.0.1): Kernel does not exist: ad69df20-22a6-40b2-9778-5e1aa8781898
[W 14:55:09.848 NotebookApp] 404 GET /api/kernels/ad69df20-22a6-40b2-9778-5e1aa8781898/channels?session_id=2eb96ae1751841d7801d200795a4bb30 (127.0.0.1) 38952.030000ms referer=None
```

EDIT 1

Rolled back pyenv, jupyter and jupyter-nbconvert, one by one and in that order. For example:

```
$ sudo pacman -U jupyter-nbconvert-6.1.0-1-any.pkg.tar.zst
```

Rebooted each time -> no change. Reinstalled the Python 3.7.0 virtual environment binaries and shims:

```
$ pyenv uninstall 3.7.0
$ pyenv install 3.7.0
$ pyenv rehash
```

Then set out to reinstall my 3GB of required packages in the 3.7.0 virtualenv, starting with ipykernel. The result is Segmentation fault (core dumped):

```
$ cd /path/to/my_directory
$ pyenv local 3.7.0
$ python -m pip install ipykernel
Segmentation fault (core dumped)
$ python -m pip install -U pip
Segmentation fault (core dumped)
$ ... etc...
```

WTF? Still troubleshooting this. Any suggestion welcome.
check width of image before converting it
Posted: 28 Oct 2021 09:48 AM PDT

I use the following script to convert all jpg and png images:

```shell
# absolute path to image folder
FOLDER="/home/*/public_html/"
# max width
WIDTH=1280
# max height
HEIGHT=720

# resize png or jpg to either height or width, keeps proportions using imagemagick
find ${FOLDER} -type f \( -iname \*.jpg -o -iname \*.png \) -exec convert \{} -verbose -resize $WIDTHx$HEIGHT\> \{} \;
```

but I was shocked today when I ran ls -l and found all photos had their modification dates changed, big or not:

```
Oct 28 11:18 /home/photos/20210321/T161631305496ece25372fc18a9239da7911ac7c0dd056 (2).jpg
```

So I'm thinking about using an if condition to check the image first: if its width is greater than 1280px, run convert; else, don't do anything.

update 2: I built this script:

```shell
#!/bin/bash
for i in /root/d/*.jpg; do
    read -r w h <<< $(identify -format "%w %h" "$i")
    if [ $w -gt 1280 ]; then
        FOLDER="$i"
        WIDTH=1280
        HEIGHT=720
        find ${FOLDER} -type f \( -iname \*.jpg -o -iname \*.png \) -exec convert \{} -verbose -resize $WIDTHx$HEIGHT\> \{} \;
    fi
done
```

So I see find is better than for; for doesn't search all folders and subfolders.

update 3:

```shell
WIDTH=1280
HEIGHT=720
find /home/sen/tes/ -type f \( -iname \*.jpg -o -iname \*.png \) | while read img; do \
    anytopnm "$img" | pamfile | \
    perl -ane 'exit 1 if $F[3]>1280' || convert "$img" -verbose -resize "${WIDTH}x${HEIGHT}>" "$img"; \
done
```

It's working well, but I get jpegtopnm: WRITING PPM FILE when there are no images wider than 1280.
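A variant of the loop above that only calls convert when the width check passes, so files at or below the limit are never rewritten and keep their modification time (the `>` geometry flag skips the resize for small images, but convert still rewrites the file, which is why every photo's mtime changed). identify and the size limits are as in the question; treat this as a sketch:

```shell
# shrink_wide_images FOLDER: resize every jpg/png wider than 1280 px down
# to fit within 1280x720 (proportions kept); narrower files are untouched.
shrink_wide_images() {
    find "$1" -type f \( -iname '*.jpg' -o -iname '*.png' \) -print0 |
    while IFS= read -r -d '' img; do          # NUL-safe, handles "(2).jpg" etc.
        w=$(identify -format '%w' "$img" 2>/dev/null) || continue
        if [ "$w" -gt 1280 ]; then
            convert "$img" -resize '1280x720>' "$img"
        fi
    done
}
```

Reading the width with `identify -format '%w'` also avoids the anytopnm | pamfile pipeline and its "jpegtopnm: WRITING PPM FILE" chatter.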
How to store '>&2' in a variable Posted: 28 Oct 2021 09:13 AM PDT Similar question, but no answer: How can we run a command stored in a variable?

How to do the following in a bash script?

error=">&2"
echo 'something went wrong' $error

instead of

echo 'something went wrong' >&2

Why? Because if you typo it like >2& , it appears to work normally but writes the error messages to a file called 2 . |
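Not part of the question, but for context: the variable approach fails because the shell parses redirections before it expands variables, so the expanded >&2 is handed to echo as an ordinary argument. A sketch of the behavior, plus one common workaround (a small helper function):

```shell
#!/bin/sh
# The shell recognizes redirections *before* variable expansion,
# so $error below becomes a literal word, not a redirection.
error=">&2"
echo 'something went wrong' $error   # prints: something went wrong >&2

# Common workaround: a helper that always writes to stderr.
err() { printf '%s\n' "$*" >&2; }
err 'something went wrong'           # really goes to stderr
```

The helper also sidesteps the ">2&" typo class entirely, since the redirection is written once, inside the function.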
DNS server setup on Debian Posted: 28 Oct 2021 10:34 AM PDT I'm trying to solve a problem with my private DNS server. I'm in a state where I can successfully resolve public DNS names, and the local names set up in bind9 for my domain (for example mydomain.com), but I can't resolve the public records for mydomain.com. I'm not sure how to solve this problem. Thanks for any help.

Edit: to better explain what exactly doesn't work.

Setup on my private NS1:
priv1.mydomain.com - 192.168.0.10
priv2.mydomain.com - 192.168.0.10

Setup at the provider:
pub1.mydomain.com

From a client PC:
dig test.mydomain.com @private.NS1 -> 192.168.0.10 -> that's OK
dig google.com @private.NS1 -> 172.217.23.238 -> that's OK
dig pub1.mydomain.com @private.NS1 -> no answer

My issue is accessing both the private and the public records. |
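Not stated in the question, but a likely explanation: since NS1 is authoritative for mydomain.com, it answers queries for that zone from its own zone file and never recurses out to the public servers, so names like pub1 that exist only at the provider return no answer. A common workaround is to duplicate the public records inside the private zone file; a sketch (the public IP below is a placeholder, not from the post):

```
; mydomain.com zone on the private NS1 (sketch)
priv1   IN  A   192.168.0.10
priv2   IN  A   192.168.0.10
pub1    IN  A   203.0.113.7   ; placeholder: copy the provider's public record here
```

The trade-off is that the copy must be kept in sync by hand whenever the provider-side record changes.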
Problem with apache2 server Posted: 28 Oct 2021 10:53 AM PDT So I am having a problem with my apache2 server sharing a file. I am on ParrotSec OS 4.11, which is Debian-based. When I create a file alexa.exe, place it in the /var/www/html directory, and then start Apache with

service apache2 start

then on my Windows computer I go to my Parrot machine's IP, http://192.168.1.218/alexa.exe, and it does not work. The response says the page cannot be reached and to make sure I have the right web address. I tried it from my cellphone and it timed out. However, if I try it from the Parrot machine itself, it works. I tried turning off the firewall on the Parrot machine and it still does not work. I cannot see why this should not work. Does anyone have any suggestions? It is not a fresh Apache install; however, I can reach the default page locally, and the firewall is gufw, a GUI, which I turned off. I have no problem getting Apache to work on Kali, and it should basically be the same on Parrot, I'm guessing. I may just try a fresh install; there is no reason it should not work after that, but if anyone has any other ideas I would be happy to hear them. |
"will not make a filesystem here!" Posted: 28 Oct 2021 09:48 AM PDT Trying to format this LV /dev/mapper/nvmeVg-var which is not mounted. See findmnt below. mkfs.ext3 /dev/mapper/nvmeVg-var mke2fs 1.45.6 (20-Mar-2020) /dev/mapper/nvmeVg-var contains a ext4 file system last mounted on /var on Mon Oct 11 23:18:35 2021 Proceed anyway? (y,N) y /dev/mapper/nvmeVg-var is apparently in use by the system; will not make a filesystem here! [root@localhost-live snapshots]# findmnt TARGET SOURCE FSTYPE OPTIONS / /dev/mapper/live-rw │ ext4 rw,relatime,seclabel ├─/proc proc proc rw,nosuid,nodev,noexec,relatime │ └─/proc/sys/fs/binfmt_misc systemd-1 autofs rw,relatime,fd=30,pgrp=1,timeout=0,minproto=5,maxproto=5,direc │ └─/proc/sys/fs/binfmt_misc binfmt_misc binfmt_mis rw,nosuid,nodev,noexec,relatime ├─/sys sysfs sysfs rw,nosuid,nodev,noexec,relatime,seclabel │ ├─/sys/kernel/security securityfs securityfs rw,nosuid,nodev,noexec,relatime │ ├─/sys/fs/cgroup cgroup2 cgroup2 rw,nosuid,nodev,noexec,relatime,seclabel,nsdelegate,memory_rec │ ├─/sys/fs/pstore pstore pstore rw,nosuid,nodev,noexec,relatime,seclabel │ ├─/sys/firmware/efi/efivars efivarfs efivarfs rw,nosuid,nodev,noexec,relatime │ ├─/sys/fs/bpf none bpf rw,nosuid,nodev,noexec,relatime,mode=700 │ ├─/sys/fs/selinux selinuxfs selinuxfs rw,nosuid,noexec,relatime │ ├─/sys/kernel/debug debugfs debugfs rw,nosuid,nodev,noexec,relatime,seclabel │ │ └─/sys/kernel/debug/tracing tracefs tracefs rw,nosuid,nodev,noexec,relatime,seclabel │ ├─/sys/kernel/tracing tracefs tracefs rw,nosuid,nodev,noexec,relatime,seclabel │ ├─/sys/fs/fuse/connections fusectl fusectl rw,nosuid,nodev,noexec,relatime │ └─/sys/kernel/config configfs configfs rw,nosuid,nodev,noexec,relatime ├─/dev devtmpfs devtmpfs rw,nosuid,seclabel,size=32845836k,nr_inodes=8211459,mode=755,i │ ├─/dev/shm tmpfs tmpfs rw,nosuid,nodev,seclabel,inode64 │ ├─/dev/pts devpts devpts rw,nosuid,noexec,relatime,seclabel,gid=5,mode=620,ptmxmode=000 │ ├─/dev/mqueue mqueue mqueue 
rw,nosuid,nodev,noexec,relatime,seclabel │ └─/dev/hugepages hugetlbfs hugetlbfs rw,relatime,seclabel,pagesize=2M ├─/run tmpfs tmpfs rw,nosuid,nodev,seclabel,size=13150860k,nr_inodes=819200,mode= │ ├─/run/initramfs/live /dev/sdf1 iso9660 ro,relatime,nojoliet,check=s,map=n,blocksize=2048 │ ├─/run/media/liveuser/c90f13b9-f228-4051-a586-7b6083f50105 │ │ /dev/sdb1 ext4 rw,nosuid,nodev,relatime,seclabel │ ├─/run/media/liveuser/Anaconda /dev/mapper/live-base │ │ ext4 ro,nosuid,nodev,relatime,seclabel │ ├─/run/user/1000 tmpfs tmpfs rw,nosuid,nodev,relatime,seclabel,size=6575428k,nr_inodes=1643 │ │ └─/run/user/1000/gvfs gvfsd-fuse fuse.gvfsd rw,nosuid,nodev,relatime,user_id=1000,group_id=1000 │ ├─/run/media/liveuser/d52b3913-2ed2-4142-9309-3fdf641141f0 │ │ /dev/md127 ext4 rw,nosuid,nodev,relatime,seclabel,stripe=256 │ ├─/run/media/liveuser/disk /dev/loop0 squashfs ro,nosuid,nodev,relatime,seclabel │ └─/run/media/liveuser/66a1a58a-c06f-4407-8d47-1fd4266c6b75 │ /dev/mapper/centos-root │ xfs rw,nosuid,nodev,relatime,seclabel,attr2,inode64,logbufs=8,logb ├─/var/lib/nfs/rpc_pipefs rpc_pipefs rpc_pipefs rw,relatime ├─/tmp tmpfs tmpfs rw,nosuid,nodev,seclabel,size=32877144k,nr_inodes=409600,inode ├─/var/tmp vartmp tmpfs rw,relatime,seclabel,inode64 └─/mnt /dev/mapper/centos-home xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noqu [liveuser@localhost-live ~]$ lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT loop0 7:0 0 1.8G 1 loop loop1 7:1 0 7.5G 1 loop ├─live-rw 253:6 0 7.5G 0 dm / └─live-base 253:7 0 7.5G 1 dm loop2 7:2 0 32G 0 loop └─live-rw 253:6 0 7.5G 0 dm / sda 8:0 0 447.1G 0 disk ├─sda1 8:1 0 200M 0 part ├─sda2 8:2 0 1G 0 part └─sda3 8:3 0 445.9G 0 part ├─centos-swap 253:0 0 31.4G 0 lvm ├─centos-home 253:1 0 364.5G 0 lvm └─centos-root 253:2 0 50G 0 lvm sdb 8:16 0 447.1G 0 disk └─sdb1 8:17 0 447.1G 0 part sdc 8:32 0 1.8T 0 disk └─md127 9:127 0 3.6T 0 raid5 sdd 8:48 0 1.8T 0 disk └─md127 9:127 0 3.6T 0 raid5 sde 8:64 0 1.8T 0 disk └─md127 9:127 0 3.6T 0 raid5 sdf 
8:80 1 3.6G 0 disk ├─sdf1 8:81 1 1.9G 0 part /run/initramfs/live ├─sdf2 8:82 1 9.9M 0 part └─sdf3 8:83 1 20.9M 0 part sr0 11:0 1 2K 0 rom zram0 252:0 0 8G 0 disk [SWAP] nvme1n1 259:0 0 953.9G 0 disk ├─nvme1n1p1 259:1 0 953M 0 part ├─nvme1n1p2 259:2 0 46.6G 0 part │ ├─nvmeVg-var 253:3 0 44G 0 lvm │ └─nvmeVg-home 253:4 0 181G 0 lvm ├─nvme1n1p3 259:3 0 46.6G 0 part │ ├─nvmeVg-home 253:4 0 181G 0 lvm │ └─nvmeVg-root 253:5 0 100G 0 lvm ├─nvme1n1p4 259:4 0 46.6G 0 part │ └─nvmeVg-home 253:4 0 181G 0 lvm ├─nvme1n1p5 259:5 0 46.6G 0 part │ └─nvmeVg-home 253:4 0 181G 0 lvm ├─nvme1n1p6 259:6 0 46.6G 0 part │ └─nvmeVg-root 253:5 0 100G 0 lvm ├─nvme1n1p7 259:7 0 46.6G 0 part │ └─nvmeVg-root 253:5 0 100G 0 lvm ├─nvme1n1p8 259:8 0 46.6G 0 part │ └─nvmeVg-home 253:4 0 181G 0 lvm ├─nvme1n1p9 259:9 0 46.6G 0 part ├─nvme1n1p10 259:10 0 46.6G 0 part ├─nvme1n1p11 259:11 0 46.6G 0 part └─nvme1n1p12 259:12 0 1G 0 part nvme0n1 259:13 0 931.5G 0 disk --- Logical volume --- LV Path /dev/nvmeVg/var LV Name var VG Name nvmeVg LV UUID 9WAde0-jcOC-ymG3-petc-cqjX-dBdS-fi4fXM LV Write Access read/write LV Creation host, time orcacomputers.orcainbox, 2021-01-25 18:37:42 -0500 LV Status available # open 0 LV Size 44.00 GiB Current LE 11264 Segments 1 Allocation inherit Read ahead sectors auto - currently set to 256 Block device 253:3 --- Logical volume --- LV Path /dev/nvmeVg/home LV Name home VG Name nvmeVg LV UUID zdQoid-kIS8-98bk-BncS-eLvf-fTD8-t8cVQ9 LV Write Access read/write LV Creation host, time orcacomputers.orcainbox, 2021-01-25 22:53:20 -0500 LV Status available # open 0 LV Size 181.00 GiB Current LE 46336 Segments 7 Allocation inherit Read ahead sectors auto - currently set to 256 Block device 253:4 --- Logical volume --- LV Path /dev/nvmeVg/root LV Name root VG Name nvmeVg LV UUID NcQmu9-17Kn-yBlu-PrzZ-xcyP-kDjm-afgKYI LV Write Access read/write LV Creation host, time orcacomputers.orcainbox, 2021-01-27 00:34:57 -0500 LV Status available # open 0 LV Size 100.00 GiB Current LE 25600 
Segments 3 Allocation inherit Read ahead sectors auto - currently set to 256 Block device 253:5 [root@localhost-live liveuser]# vgdisplay --- Volume group --- VG Name centos System ID Format lvm2 Metadata Areas 1 Metadata Sequence No 6 VG Access read/write VG Status resizable MAX LV 0 Cur LV 3 Open LV 0 Max PV 0 Cur PV 1 Act PV 1 VG Size 445.93 GiB PE Size 4.00 MiB Total PE 114159 Alloc PE / Size 114158 / <445.93 GiB Free PE / Size 1 / 4.00 MiB VG UUID h3Rhh8-1jGr-ylLe-Hagr-vJ8h-fibH-PxYOye --- Volume group --- VG Name nvmeVg System ID Format lvm2 Metadata Areas 7 Metadata Sequence No 11 VG Access read/write VG Status resizable MAX LV 0 Cur LV 3 Open LV 0 Max PV 0 Cur PV 7 Act PV 7 VG Size <325.94 GiB PE Size 4.00 MiB Total PE 83440 Alloc PE / Size 83200 / 325.00 GiB Free PE / Size 240 / 960.00 MiB VG UUID sM2ZQz-ke7H-543U-EylK-pO25-0G6S-jhV57f [root@localhost-live liveuser]# pvdisplay --- Physical volume --- PV Name /dev/sda3 VG Name centos PV Size 445.93 GiB / not usable 0 Allocatable yes PE Size 4.00 MiB Total PE 114159 Free PE 1 Allocated PE 114158 PV UUID OjAFDa-Il7s-Vj0h-Lian-culw-97um-9GYjOo --- Physical volume --- PV Name /dev/nvme1n1p2 VG Name nvmeVg PV Size <46.57 GiB / not usable 3.00 MiB Allocatable yes PE Size 4.00 MiB Total PE 11920 Free PE 240 Allocated PE 11680 PV UUID M1em0l-TY0y-ZuIt-DK2i-0yJp-OHNz-7RfupC --- Physical volume --- PV Name /dev/nvme1n1p3 VG Name nvmeVg PV Size <46.57 GiB / not usable 3.00 MiB Allocatable yes (but full) PE Size 4.00 MiB Total PE 11920 Free PE 0 Allocated PE 11920 PV UUID qkaPsI-FLzs-wt4Y-bnhm-BpGK-aOcR-fheulP --- Physical volume --- PV Name /dev/nvme1n1p4 VG Name nvmeVg PV Size <46.57 GiB / not usable 3.00 MiB Allocatable yes (but full) PE Size 4.00 MiB Total PE 11920 Free PE 0 Allocated PE 11920 PV UUID CTkIFV-Ebvf-Ps5w-rysY-s7U0-VLhs-6jLVRV --- Physical volume --- PV Name /dev/nvme1n1p5 VG Name nvmeVg PV Size <46.57 GiB / not usable 3.00 MiB Allocatable yes (but full) PE Size 4.00 MiB Total PE 11920 Free PE 0 
Allocated PE 11920 PV UUID Sjii2Q-zkwB-9Nhb-0g6o-4rt3-O9gy-4CMtEI --- Physical volume --- PV Name /dev/nvme1n1p6 VG Name nvmeVg PV Size <46.57 GiB / not usable 3.00 MiB Allocatable yes (but full) PE Size 4.00 MiB Total PE 11920 Free PE 0 Allocated PE 11920 PV UUID QLUYbk-TzNY-RZHz-ck60-gbqA-kPtk-QT2Tm4 --- Physical volume --- PV Name /dev/nvme1n1p7 VG Name nvmeVg PV Size <46.57 GiB / not usable 3.00 MiB Allocatable yes (but full) PE Size 4.00 MiB Total PE 11920 Free PE 0 Allocated PE 11920 PV UUID nQg41G-8A3m-wMog-LBzJ-U09n-W1md-lgVEdQ --- Physical volume --- PV Name /dev/nvme1n1p8 VG Name nvmeVg PV Size <46.57 GiB / not usable 3.00 MiB Allocatable yes (but full) PE Size 4.00 MiB Total PE 11920 Free PE 0 Allocated PE 11920 PV UUID D5HOGp-nLA3-zypn-edIj-uPon-Pzrj-N6JcB5 "/dev/nvme1n1p1" is a new physical volume of "953.00 MiB" --- NEW Physical volume --- PV Name /dev/nvme1n1p1 VG Name PV Size 953.00 MiB Allocatable NO PE Size 0 Total PE 0 Free PE 0 Allocated PE 0 PV UUID CjuOUt-h2bH-EjCp-ALwd-c8BW-ZckJ-cpB322 "/dev/nvme1n1p10" is a new physical volume of "<46.57 GiB" --- NEW Physical volume --- PV Name /dev/nvme1n1p10 VG Name PV Size <46.57 GiB Allocatable NO PE Size 0 Total PE 0 Free PE 0 Allocated PE 0 PV UUID 0XEQEc-pHGc-2B02-d4lp-581f-ZMYv-vKTgpG "/dev/nvme1n1p11" is a new physical volume of "<46.57 GiB" --- NEW Physical volume --- PV Name /dev/nvme1n1p11 VG Name PV Size <46.57 GiB Allocatable NO PE Size 0 Total PE 0 Free PE 0 Allocated PE 0 PV UUID NF82AB-ZUaP-D9FF-PLVP-HMuA-pWFz-NIZFRG |
SSH to Cisco device fails with diffie-hellman-group1-sha1 Posted: 28 Oct 2021 10:51 AM PDT When trying to SSH from my Debian box to a Cisco router, I got the message:

Unable to negotiate with 192.168.1.1 port 22: no matching key exchange method found. Their offer: diffie-hellman-group-exchange-sha1

There are some similar questions on this forum which claim to have the answer; however, I found they did not work for me due to small differences, so I decided to post the question and answer here. |
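Not part of the post, but for context: modern OpenSSH disables these legacy key-exchange methods by default, and they can be re-enabled per host rather than globally. A sketch of a client-side config entry, assuming the exact offer shown above (the host alias is an example):

```
# ~/.ssh/config (example host entry)
Host cisco-router
    HostName 192.168.1.1
    KexAlgorithms +diffie-hellman-group-exchange-sha1
    # Older IOS images may also require a legacy host key algorithm:
    HostKeyAlgorithms +ssh-rsa
```

The leading "+" appends to the default list instead of replacing it, so connections to other hosts keep the stricter defaults.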
Grub's default kernel priority Posted: 28 Oct 2021 09:28 AM PDT I recently installed Arch on one of my machines, with grub in UEFI mode. While setting up Arch, I had installed linux-lts . I used it for some days, and later decided to use both the LTS and the regular kernel, so I installed the linux (regular) package. After its installation, I expected grub to boot into the latest linux . But it continued to boot into the older linux-lts . I tried to regenerate the initramfs and updated grub a few times, but didn't succeed. To get grub to boot the latest linux , I had to edit the grub menu entries using grub-customizer . Is this normal behavior for grub ? I had read somewhere that grub actually prioritizes the latest kernel if found and boots it directly. Then why is my case different? Have I misconfigured something? |
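Not stated in the post, but one common way to control which entry GRUB picks is /etc/default/grub; a sketch of the relevant settings (entry numbers and titles depend on your generated grub.cfg):

```
# /etc/default/grub
# Either pin an entry by index (0 = first menu entry)...
GRUB_DEFAULT=0
# ...or make GRUB remember whatever was booted last:
#GRUB_DEFAULT=saved
#GRUB_SAVEDEFAULT=true
```

After editing, the menu must be regenerated, e.g. with grub-mkconfig -o /boot/grub/grub.cfg; the order of generated entries (and thus index 0) comes from the os-prober/10_linux scripts, which is why a newly installed kernel is not always first.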
Enable Gnome Screen Sharing via Commandline? Posted: 28 Oct 2021 10:06 AM PDT I set up an automated kickstart installation for a "digital-signage client" based on Fedora 30 (soon 32). Now I want to add the enabling of "Gnome Screen Sharing" to the installation, to be able to get actual visual feedback on what is on the screen right now. I got this to work via the settings in the GUI (Settings - Sharing - Screen Sharing) and I'm also able to set the "subsettings" via gsettings, e.g.

gsettings set org.gnome.desktop.remote-desktop.vnc view-only false
gsettings set org.gnome.desktop.remote-desktop.vnc auth-method 'password'

But I wasn't able to find the setting to enable the "Screen Sharing" itself. When I enable it via the GUI, I can see via systemctl status:

systemctl status | grep gnome-remote | grep -v grep
│ │ ├─gnome-remote-desktop.service
│ │ │ └─5572 /usr/libexec/gnome-remote-desktop-daemon

I tried to start this service and also the "daemon" directly with systemctl start, but it only results in

Failed to start gnome-remote-desktop-daemon.service: Unit gnome-remote-desktop-daemon.service not found.

There are two quite similar questions, but they seem outdated, because I don't have a schema "org.gnome.Vino".

So: How can I enable Gnome Screen Sharing via the command line?

Addition: I've invested a lot of time to get this to work and could solve all but one problem. I now know that I have to start the service as the user, so my whole procedure is:

# Configuration
gsettings set org.gnome.desktop.remote-desktop.vnc auth-method 'password'
gsettings set org.gnome.desktop.remote-desktop.vnc view-only false
gsettings set org.gnome.settings-daemon.plugins.sharing.service:/org/gnome/settings-daemon/plugins/sharing/gnome-remote-desktop/ enabled-connections "['$( grep UUID /etc/sysconfig/network-scripts/ifcfg-enp1s0 | cut -d= -f2)']"

# Start the Remote-Desktop-Service
systemctl start --user gnome-remote-desktop

I set it to "password" so that no one has to click "accept", set "view-only" to "false" to be able to control it, and set the UUID of my network interface. Afterwards I can start the service, correctly configured. So the last missing step is that I'm not able to set the password via the command line. I tried it like for Vino and also with secret-tool, but it doesn't work:

gsettings set org.gnome.Vino vnc-password $(echo -n "myPassword"|base64)
secret-tool store --label='Label' {attribute} {value}

The problem with secret-tool is maybe that the original entry in the Gnome keyring doesn't have an "attribute" and a "value", but those are mandatory for secret-tool, so I can't reproduce the entry 1:1. So: Does someone have an idea how I can set the password for Gnome screen sharing correctly via the CLI? |
Linux Mint 19.1->19.2 upgrade broke Cinnamon - driver issue? Posted: 28 Oct 2021 09:04 AM PDT Updating broke Cinnamon for me. I was on 19.1 Tessa, Cinnamon edition, and updated to 19.2 through the Update Manager. Now when I boot I get the message "Cinnamon just crashed. You are currently running in Fallback Mode" and I have the option of restarting Cinnamon, but it immediately crashes again. I've tried with kernel versions 5.0.0-23, 5.0.0-20, and 4.15.0-55. My graphics card is an RX 480. I tried running cinnamon --replace in the terminal. The output can be found here. I tried booting into recovery mode, then selecting the option to continue into a normal boot. I then got a message saying that "Your system is currently running without video hardware acceleration." However, Cinnamon is working fine. This is Mint 19.2, Cinnamon version 4.2.3, and kernel version 5.0.0-23-generic. This is the output of running cinnamon --replace on the working Cinnamon desktop I get from booting through recovery mode. Conspicuously, the crashing version has the line

cinnamon: ../src/gallium/drivers/radeonsi/si_state_viewport.c:239: si_emit_guardband: Assertion `left <= -1 && top <= -1 && right >= 1 && bottom >= 1' failed.

where the working version is starting to add systrays. Finally, I tried booting normally but selecting Cinnamon with software rendering at the login screen. It worked fine. Then I tried to restart Cinnamon and got this output, with the same error message from the driver. It definitely seems like a driver issue, but I don't know how to resolve it. Edit: this is the subject of a bug discussion on GitHub. It seems to have to do with the position of monitors in a multi-monitor setup. |
img2pdf batch script Posted: 28 Oct 2021 10:34 AM PDT I currently have over 10K folders. Each folder has .png and .jpg images that need to be turned into .pdf files. The files are already in numerical order. I am looking for speed. I can currently cd into each folder and run

img2pdf * -o out.pdf

and I get a perfectly created out.pdf in less than a second, even with several hundred images. My end goal is to automate this in a shell script that can be invoked, so each folder is basically turned into a .pdf file with the filename the same as the old directory, like so:

Directory001/img001.jpg img002.jpg img003.jpg
OtherDirectory/img1.png img2.png img3.png

becomes

Directory001.pdf
OtherDirectory.pdf

while only using img2pdf to do this, as it is by far the fastest way to create the pdfs. I have some simple bash experience, but only with simple one-liners (like turning all directories into zips). I know this can be done, but have no idea where to begin. |
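Not from the post, but one possible shape for such a script: loop over the subdirectories of a parent folder and run img2pdf once per directory, naming the output after the directory. The parent path is an assumption.

```shell
#!/bin/sh
# Create one PDF per subdirectory of $1, named <subdir>.pdf next to it.
# Relies on the shell's sorted glob order, which matches the numeric
# filename order described above.
make_pdfs() {
    parent=$1
    for dir in "$parent"/*/; do
        [ -d "$dir" ] || continue
        d=${dir%/}                    # strip the trailing slash
        img2pdf "$d"/* -o "$d.pdf"
    done
}

# Usage (hypothetical path):
# make_pdfs /path/to/folders
```

Since each img2pdf invocation is independent, the loop body could later be parallelized (e.g. with xargs -P) if a single sequential pass over 10K folders proves too slow.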
Disable GDM suspend on lock screen Posted: 28 Oct 2021 09:38 AM PDT I'm using Arch Linux + GNOME 3 on a desktop, and when the system starts or the user logs out, gdm displays the login screen for about 20 seconds and then turns off the display (although the computer is still running). Is it possible to disable this? I want the monitor to keep displaying the login screen "forever". I couldn't find any way to configure this. |
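Not from the post, but a commonly suggested approach on GNOME systems is to override the GDM greeter's power settings through a system dconf database, since the greeter runs its own gnome-settings-daemon instance. A sketch following the usual dconf layout (the file name is hypothetical; verify the gdm dconf profile exists on Arch):

```
# /etc/dconf/db/gdm.d/01-no-blank  (hypothetical file name)
[org/gnome/settings-daemon/plugins/power]
sleep-inactive-ac-type='nothing'
sleep-inactive-battery-type='nothing'

[org/gnome/desktop/session]
idle-delay=uint32 0
```

After creating the file, run dconf update as root so the binary database is rebuilt, then restart gdm to pick up the change.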
How to assign variable in awk Posted: 28 Oct 2021 10:00 AM PDT I have code like below:

$ awk -F'[]]|[[]' \
    '$0 ~ /^\[/ && $2 >= "2014-04-07 23:00" { p=1 }
     $0 ~ /^\[/ && $2 >= "2014-04-08 02:00:01" { p=0 }
     p { print $0 }' log

Here I want to assign "2014-04-07" to one variable, "23:00" to another, and so on, and take the input from their values (using $var or something like that). Can someone help me modify the above code so I can use variables instead of hardcoding the timestamps? Below is the link to the original question: How to extract logs between two time stamps. |
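Not from the post, but one standard way to do this is awk's -v option, which passes a shell variable into the awk program as an awk variable; the timestamps then live in the shell and the awk script never hardcodes them. A runnable sketch with a small sample log (the log entries are hypothetical):

```shell
#!/bin/sh
# Sample log to demonstrate (hypothetical entries):
cat > /tmp/sample.log <<'EOF'
[2014-04-07 22:00:00] before window
[2014-04-07 23:30:00] inside window
[2014-04-08 02:30:00] after window
EOF

start='2014-04-07 23:00'
end='2014-04-08 02:00:01'

# -v injects the shell values as awk variables `start` and `end`;
# inside awk they are referenced bare (no $), unlike shell variables.
awk -F'[]]|[[]' -v start="$start" -v end="$end" '
    $0 ~ /^\[/ && $2 >= start { p = 1 }
    $0 ~ /^\[/ && $2 >= end   { p = 0 }
    p' /tmp/sample.log
```

This prints only the "inside window" line. The string comparisons still work because the timestamp format sorts lexicographically.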