Shell commands inside neovim, which require access to /dev/tty, don't work in bubblewrap container Posted: 11 May 2021 09:48 AM PDT In order to have two different neovim versions with two different configs side-by-side, I decided to use bubblewrap. Using bubblewrap, I was able to create the following script, which mounts ~/.config/nvim-nightly on top of ~/.config/nvim : #!/usr/bin/env sh nvim_stable_config="$XDG_CONFIG_HOME/nvim" nvim_stable_data="$XDG_DATA_HOME/nvim" nvim_nightly_path="$HOME/clones/neovim" nvim_nightly_runtime="$nvim_nightly_path/runtime" nvim_nightly_config="$XDG_CONFIG_HOME/nvim-nightly" nvim_nightly_data="$XDG_DATA_HOME/nvim-nightly" nvim="$nvim_nightly_path/build/bin/nvim" export VIMRUNTIME="$nvim_nightly_runtime" bwrap \ --bind / / \ --dev /dev \ --bind "$nvim_nightly_config" "$nvim_stable_config" \ --bind "$nvim_nightly_data" "$nvim_stable_data" \ "$nvim" "$@" Overall, it works well, but I'm not able to update configuration in neovim inside container with :!chezmoi apply . It exits with following error: :!chezmoi apply chezmoi: open /dev/tty: no such device or address shell returned 1 Press ENTER or type command to continue I have noticed, that fzf also returns similar error: :!fzf Failed to open /dev/tty shell returned 2 Press ENTER or type command to continue Is it possible to allow access to /dev/tty for neovim inside bubblewrap? |
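A workaround that reportedly helps in this situation (an assumption to verify on your setup, not a confirmed fix) is to stop constructing a fresh minimal /dev with --dev and instead bind the host's /dev wholesale, so that programs opening /dev/tty talk to the real controlling terminal. The end of the script would then read:

```shell
# sketch: replace '--dev /dev' with '--dev-bind /dev /dev' so that
# programs opening /dev/tty reach the real controlling terminal
bwrap \
    --bind / / \
    --dev-bind /dev /dev \
    --bind "$nvim_nightly_config" "$nvim_stable_config" \
    --bind "$nvim_nightly_data" "$nvim_stable_data" \
    "$nvim" "$@"
```

This trades some device isolation for a working /dev/tty; since the script already binds / read-write, that trade-off is small here.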
How can I parallelize sha256sum or other hashing commands? Posted: 11 May 2021 09:04 AM PDT I want to parallelize the hash calculation process, because I have a very large number of files and a lot of data. When I watch the CPU usage of these commands I am disappointed, because they only use one thread. How can I parallelize these? sha256sum foo.mp4 OR openssl dgst -sha256 foo.mp4 |
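A single sha256sum invocation cannot use more than one core, because SHA-256 hashes a file as one sequential chain. What can be parallelized is hashing many files at once, for example with xargs -P (a sketch; adjust the -P value and the -name pattern to your file set):

```shell
# hash all matching files, running up to 4 sha256sum processes at a time;
# each file is still hashed by a single core, but 4 files run concurrently
find . -type f -name '*.mp4' -print0 | xargs -0 -P 4 -n 1 sha256sum
```

The output order is whatever finishes first, so sort it (e.g. sort -k2) before comparing runs.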
Swap super and ctrl keys in xmodmap Posted: 11 May 2021 08:56 AM PDT I'm trying to swap my ctrl and super keys. I currently have successfully mapped the super keys to ctrl, now I need to map the ctrl keys to super, but I can't find a way to do so. Current code: ! Map both super to ctrl remove mod4 = Super_R add control = Super_R remove mod4 = Super_L add control = Super_L ! Map both ctrl to super ! ? Thanks in advance. |
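The missing half mirrors the first: release the Control keysyms from the control modifier, then attach them to mod4. A sketch completing the file (assuming the standard Control_L / Control_R keysyms; check yours with xmodmap -pm):

```
! Map both ctrl to super
remove control = Control_R
add mod4 = Control_R
remove control = Control_L
add mod4 = Control_L
```

Each key is removed from its old modifier before being added to the new one, so neither key ends up assigned to both modifiers at once.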
How to generate the last 1 hour in seconds in unixtime? Posted: 11 May 2021 09:15 AM PDT I'm wondering if there is a way of generating the unixtime seconds for the past hour. So 3600 timestamps. Is there a quick date command? |
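If GNU seq and date are available (they are on a typical Linux box), the 3600 timestamps are just an arithmetic range ending at the current epoch second:

```shell
# print every Unix timestamp (in seconds) for the past hour, oldest first
now=$(date +%s)
seq "$((now - 3599))" "$now"
```

Using now - 3599 rather than now - 3600 keeps the range inclusive of the current second, for exactly 3600 lines.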
How to copy a file into another file at a certain position using the command dd? Posted: 11 May 2021 09:07 AM PDT I want to copy a file of 256 bytes at a certain position into another file of size 2048 bytes containing random data, with the command dd in Linux. The offset I have is 144 . I have assumed that bs=1 , count=256 , and seek=144 . So this is the command I run: dd if=file1.data of=file2.data bs=1 count=256 seek=144 However, when I run this, file2.data, which should still be 2048 bytes, shrinks. Could someone help me figure out why my command is wrong and how I can make sure the data is placed at the correct position? |
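The shrinking happens because dd truncates the output file after the last block it writes, so seek=144 count=256 leaves it 400 bytes long. conv=notrunc tells dd to patch the file in place instead (a sketch using the filenames from the question):

```shell
# overwrite 256 bytes of file2.data at byte offset 144,
# leaving the rest of the 2048-byte file untouched
dd if=file1.data of=file2.data bs=1 count=256 seek=144 conv=notrunc
```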
Ubuntu Xauthority File and Login Loop Posted: 11 May 2021 08:40 AM PDT I am not able to log in to my Ubuntu (18.04) system. It takes me back to the login screen. I have followed the instructions in the other answers, and they all suggest changing the ownership of the .Xauthority file. However, I don't have an .Xauthority file. What should I do now? Please help. Thanks |
Battery / temperature-optimized custom kernels (opposite of liquorix)? Posted: 11 May 2021 08:32 AM PDT I've been looking for an anti-performance-tuned kernel on the webs, but I can't really find any, or even a list of custom kernel builds to choose from. I've tried to search for liquorix + xanmod , because those are the only distributions I know, but it doesn't turn up anything useful. So, is there a relevant list of custom kernels, or more specifically a heat / battery-optimized build? |
date validation for a custom format in shell Posted: 11 May 2021 08:17 AM PDT I am writing a generic script for custom-format date validation. Here is the script: dateformat=$1 d=$2 date "+$dateformat" -d "$d" > /dev/null 2>&1 if [ $? != 0 ] then echo "Date $d NOT a valid YYYY-MM-DD date" exit 1 fi The issue: sh -x poc_col_val_date.sh "%Y-%m-%d" "2019-11-09" is expected to be valid, and the output is correct. sh -x poc_col_val_date.sh "%d-%m-%Y" "2019-11-09" is expected to be invalid, but the output reports a valid date. |
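The second call "succeeds" because GNU date -d parses the input with its own rules (here: ISO YYYY-MM-DD) and ignores the output format entirely; the +FORMAT only controls printing. A common workaround is to round-trip: parse, reprint in the target format, and accept only an exact match. A sketch, with the caveat that -d still uses GNU's parsing heuristics, so ambiguous strings (e.g. DD-MM read as MM-DD) can still be rejected or misread:

```shell
# accept a date string only if reprinting it in the target format
# reproduces the input exactly (GNU date assumed)
validate_date() {
    fmt=$1 d=$2
    parsed=$(date -d "$d" "+$fmt" 2>/dev/null) && [ "$parsed" = "$d" ]
}

validate_date '%Y-%m-%d' '2019-11-09' && echo valid || echo invalid   # valid
validate_date '%d-%m-%Y' '2019-11-09' && echo valid || echo invalid   # invalid
```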
(reference) Where can I learn more about I/O in Linux? [closed] Posted: 11 May 2021 08:21 AM PDT I have basic understanding of file I/O. I know how to operate the read and write system calls in Linux. I know that there is blocking and non-blocking I/O. I want to know more about the underlying mechanism in OS for I/O. For example: How is blocking and non-blocking I/O implemented? How do the kernel I/O buffers work? How big are they and when are they "erased"? For example if there is an open socket how much data will be kept in the kernel buffer before the data is overwritten? I prefer a book or blog post of some kind. And please don't tell me to go and read the source. I know that if I REALLY want to know how those things work that is the best I can do, but I simply am unable to dedicate so much time to this subject. |
Pausing and then resuming a piped command Posted: 11 May 2021 07:06 AM PDT I'm downloading a lot of data for my research. The data is being downloaded on one of my campus's supercomputers, but data downloads are interrupted every hour. When the OS pauses the pipeline, I have to delete all of the lines of the text file that represent the files that have already been downloaded. Not hard, but annoying and I would prefer not to do that. Here is how I am downloading everything cat subset.txt | tr -d '\r' | xargs -P 4 -n 1 curl -LJO -s -n --globoff -c ~/.urs_cookies -b ~/.urs_cookies Each url is passed to curl and xargs gives me 4 parallel downloads. Is there a way to pause the entire pipeline and continue the pipeline later on? |
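Assuming the goal is to freeze the whole pipeline rather than cancel it, job-control signals work: SIGSTOP suspends every process in the pipeline's process group and SIGCONT resumes them, with no state lost inside xargs. A sketch (the curl options are the ones from the question; note that a file interrupted mid-transfer still restarts from scratch unless curl is also given -C -):

```shell
# run the pipeline in its own process group so it can be paused as a unit
set -m    # enable job control: the background job gets its own group
cat subset.txt | tr -d '\r' |
    xargs -P 4 -n 1 curl -LJO -s -n --globoff -c ~/.urs_cookies -b ~/.urs_cookies &
pgid=$(ps -o pgid= -p "$!" | tr -d ' ')

kill -STOP -- "-$pgid"    # pause: every process in the group freezes
kill -CONT -- "-$pgid"    # resume later, exactly where it stopped
```

The negative argument to kill targets the whole process group, so cat, tr, xargs, and all four curl workers stop and start together.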
How to merge two files using awk? Posted: 11 May 2021 07:18 AM PDT I have two pipe-delimited files that share one common field. For example: File1= A|B File2= A|C|D|E|F|G|H Output= A|B|C|D|E|F|G|H If the common field is present, we need to print A|B|C|D|E|F|G|H. But if the common field is not present, we need to write A|B||||||| instead, padding with empty fields. |
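One way to express this in awk (a sketch assuming the first field is the join key and File2's field count is fixed per run): read File2 first into an array keyed on field 1, then append either the stored remainder or the right number of empty fields to each File1 line:

```shell
# merge File1 and File2 on the first '|' field; pad with empty fields
# when the key from File1 is missing in File2
awk -F'|' '
    NR == FNR { n = NF; rest[$1] = substr($0, length($1) + 2); next }
    $1 in rest { print $0 "|" rest[$1]; next }
    {
        pad = ""
        for (i = 2; i < n; i++) pad = pad "|"   # n-1 extra empty fields
        print $0 "|" pad
    }
' File2 File1
```

Passing File2 before File1 matters: the NR == FNR rule only fires while the first file is being read.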
/dev/sda **sometimes** confused with /dev/sdb by smartctl Posted: 11 May 2021 08:55 AM PDT I have a script - run from cron.daily - that gathers SMART stats from two identical SATA SSDs. However, smartctl -A /dev/sda sometimes returns the stats for /dev/sdb - and when it does, smartctl -A /dev/sdb returns the stats for /dev/sda. However, sometimes it gets it right! The system boots into / on an M2 nvme0n1 with /home on one of the SATA SSDs, and all filesystems are mounted via fstab using UUID references. I have tried inserting random sleep commands - but this makes no difference. The output of smartctl doesn't include any indication of which device it is the output of - example output:- smartctl 6.6 2017-11-05 r4594 [x86_64-linux-5.10.0-0.bpo.5-amd64] (local build) Copyright (C) 2002-17, Bruce Allen, Christian Franke, www.smartmontools.org === START OF READ SMART DATA SECTION === SMART Attributes Data Structure revision number: 1 Vendor Specific SMART Attributes with Thresholds: ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE 5 Reallocated_Sector_Ct 0x0033 100 100 010 Pre-fail Always - 0 9 Power_On_Hours 0x0032 099 099 000 Old_age Always - 2396 . . . uname -a Linux hal 5.10.0-0.bpo.5-amd64 #1 SMP Debian 5.10.24-1~bpo10+1 (2021-03-29) x86_64 GNU/Linux Here is the script, which writes all the output as a single CSV line to a log file. 
#!/bin/sh # SMART DISK PROCESSING # ===================== tmpfile=$(mktemp -q) today=$(date -u +%d-%m-%Y) smartctl -A /dev/sdb > $tmpfile # Output log as a single line - note "Unknown_Attribute" is "POR_Recovery_Count" [unexpected shutdown] echo -n $today ', ' >> /var/log/disk-monitor.d/sdb-errors.csv awk 'NR>=8 && NR<=21 {print $1,",",$2,",",$10,",";}' $tmpfile | tr -d '\n' | sed 's/Unknown_Attribute/POR_Recovery_Count/;s/\,$/\n/' >> /var/log/disk-monitor.d/sdb-errors.csv #------------------------------ smartctl -A /dev/sda > $tmpfile # Output log as a single line - note "Unknown_Attribute" is "POR_Recovery_Count" [unexpected shutdown] echo -n $today ', ' >> /var/log/disk-monitor.d/sda-errors.csv awk 'NR>=8 && NR<=21 {print $1,",",$2,",",$10,",";}' $tmpfile | tr -d '\n' | sed 's/Unknown_Attribute/POR_Recovery_Count/;s/\,$/\n/' >> /var/log/disk-monitor.d/sda-errors.csv exit 0 |
List only sub-directories containing two specific files Posted: 11 May 2021 09:03 AM PDT I'm running the following code on iOS using my iPhone's terminal, to be clear, this command is run within my jailbroken iphone using a slim terminal tweak called New Term 2: cd /var/mobile/Library/Widgets find . -maxdepth 3 -name 'index.html' -printf "%h\n" This returns the list of the folders containing index.html . I'd like to know how to add another file: Config_extra.js (if it exists, it'll be located in the same folder as index.html) to the search in a way that the results show only folders containing both files Thanks in advance |
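With GNU findutils (which the -printf in the question implies), one way is to let find test for the second file in the same directory as each match, via -execdir; the relative name Config_extra.js is resolved in the directory containing index.html:

```shell
# print only the directories that contain both index.html and Config_extra.js
find . -maxdepth 3 -name index.html \
    -execdir test -e Config_extra.js \; -printf '%h\n'
```

-execdir acts as a filter here: -printf only runs for matches where the test exits with status zero.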
AWK: how can I tell where column begins Posted: 11 May 2021 09:14 AM PDT After parsing the input line, awk provides access to the original line ($0 ) as well as to each individual column ($1 , $2 , ...). While performing this process (lazily, on demand) - it knows exactly the position of the character where the 2nd column starts. - Does it provide access to this info (i.e., at what position in the original line $0 does the 2nd column start)?
- If not - is there any sane/elegant way of finding it out properly? (I'm about to start coding an ugly and inefficient way of mimicking awk's internal behavior by using dynamic-regexps based on
FS , handling special FS==" " case, using capturing groups, etc. But wanted your advice before I dive deep into it.) Example 1 (default FS): $ echo -n -e " \tFirst \t\t Second \t Third \t"\ |awk -F" " '{print "FS:["FS"]";for(i=0;i<=5;i++)if(""!=$i)print "$"i":["$i"]"}'\ |sed 's/\t/\\t/g' FS:[ ] $0:[ \tFirst \t\t Second \t Third \t] $1:[First] $2:[Second] $3:[Third] in here - I need to know that the 2nd column (Second ) starts with the letter S and this is the 13th character in the input line (so I would be able to store First as the key, and preserve/store the Second \t Third \t intact as the value for the further use) Example 2 (TAB as FS): $ echo -n -e " \tFirst \t\t Second \t Third \t"\ |awk -F"\t" '{print "FS:["FS"]";for(i=0;i<=5;i++)if(""!=$i)print "$"i":["$i"]"}'\ |sed 's/\t/\\t/g' FS:[\t] $0:[ \tFirst \t\t Second \t Third \t] $1:[ ] $2:[First ] $4:[ Second ] $5:[ Third ] in here - I need to know that the 2nd column (First ) starts with the letter F and this is the 3rd character in the input line - so I would be able to store (space) as the key, and preserve/store First \t\t Second \t Third \t intact as the value for the further use Example 3 (custom FS): $ echo -n -e " \tFirst \t\t Second \t Third \t"\ |awk -F"[ \t]+" '{print "FS:["FS"]";for(i=0;i<=5;i++)if(""!=$i)print "$"i":["$i"]"}'\ |sed 's/\t/\\t/g' FS:[[ \t]+] $0:[ \tFirst \t\t Second \t Third \t] $2:[First] $3:[Second] $4:[Third] in here - I need to know that the 2nd column (First ) starts with the letter F and this is the 3rd character in the input line - so I would know the 1st column is an empty string, and store the First \t\t Second \t Third \t as the value for the further use Example 4 (complex FS): $ echo "-11...22;,;..;33-44...;"\ |awk -F"[^0-9-]+" '{print "FS:["FS"]";for(i=0;i<=5;i++)if(""!=$i)print "$"i":["$i"]"}' FS:[[^0-9-]+] $0:[-11...22;,;..;33-44...;] $1:[-11] $2:[22] $3:[33-44] in here - I need to know that the 2nd column (22 ) starts with the character 2 and this is the 7th character 
in the input line - so I would be able to store -11 as the key, and 22;,;..;33-44...; as the value for the further use Basically the idea is to grab some (1st) columns for a custom use and to preserve (store into a variable) the remainder of the line (from 2nd column till end of line) intact. |
Game dedicated server application crash/lag after some time on Ubuntu 18.04 Posted: 11 May 2021 07:31 AM PDT I've been running a game server application which, after some time (hours of running even full without any kind of problem), crashes with all the people connected to it (it just lags for one/two minutes making most people disconnect although the application stays open, recovering itself for new people connections thereafter). The server is running on a VPS with one vCore, 500 MB RAM and 400 mbps network bandwidth. I've monitored the resources, and when game server is full, CPU is working at 50%, while the RAM always at about 30%. Upload consuming about 10 mbps. All ports are forwarded through provider panel (both TCP and UDP ports). It's the Assetto Corsa dedicated server which I'm talking about. Is there something related to tcp keep alive parameters to set at system level? Here there are the logs at the moment of crash. 2021-05-10 14:47:38,480: PAGE: /JSON|76561199107477778 2021-05-10 14:47:38,657: PAGE: /JSON|76561198295534738 2021-05-10 14:47:38,827: ERROR on SendTCPPacket: write tcp ipxxx:9722->ipxxx:43602: write: connection timed out 2021-05-10 14:47:38,827: ERROR on SendTCPPacket: write tcp ipxxx:9722->ipxxx:56621: write: broken pipe 2021-05-10 14:47:38,827: ERROR on SendTCPPacket: write tcp ipxxx:9722->ipxxx:63440: write: broken pipe many many more for 1/2 minutes EDIT: I've seen also a no route to host error added to connection timeout and broken pipe. How can I fix this? EDIT 2: now I've forwarded ports with UFW (before it was disabled), and set allow any ports in IONOS firewall. It should disable their firewall as they've said, could it be the cause of tcp timeouts? |
Only Flatpak apps can open all websites after connecting to VPN Posted: 11 May 2021 09:41 AM PDT I'm using PureVPN app (openvpn/udp). Installed via rpm file. After connecting, If I ping youtbue.com it shows: --- youtube.com ping statistics --- 7 packets transmitted, 0 received, 100% packet loss, time 6149ms but If I ping google.com it will work without any problem. Some websites work and some not. If I go to the browsers (Firefox and Brave installed via RPM) they cannot open youtube.com. Now if I use the flatpak browsers it will open youtube.com without any problem. I even tried GNOME boxes which is installed as Flatpak and used the same Fedora 34 as host and it will open youtube.com. What is the problem? how can I fix it? ifconfig ... tun0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST> mtu 1500 inet xxx.xx.xx.xxx netmask 255.255.255.224 destination xxx.xx.xxx.xx unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 txqueuelen 100 (UNSPEC) RX packets 103978 bytes 135509956 (129.2 MiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 61682 bytes 4764217 (4.5 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 ... 
journalctl -fu NetworkManager.service May 09 20:31:49 fedora NetworkManager[844]: <info> [1620617509.2142] manager: (tun0): new Tun device (/org/freedesktop/NetworkManager/Devices/19) May 09 20:31:49 fedora NetworkManager[844]: <info> [1620617509.2355] device (tun0): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external') May 09 20:31:49 fedora NetworkManager[844]: <info> [1620617509.2422] device (tun0): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'external') May 09 20:31:49 fedora NetworkManager[844]: <info> [1620617509.2437] device (tun0): Activation: starting connection 'tun0' (xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx) May 09 20:31:49 fedora NetworkManager[844]: <info> [1620617509.2444] device (tun0): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'external') May 09 20:31:49 fedora NetworkManager[844]: <info> [1620617509.2453] device (tun0): state change: prepare -> config (reason 'none', sys-iface-state: 'external') May 09 20:31:49 fedora NetworkManager[844]: <info> [1620617509.2529] device (tun0): state change: config -> ip-config (reason 'none', sys-iface-state: 'external') May 09 20:31:49 fedora NetworkManager[844]: <info> [1620617509.2535] device (tun0): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'external') May 09 20:31:49 fedora NetworkManager[844]: <info> [1620617509.2559] device (tun0): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'external') May 09 20:31:49 fedora NetworkManager[844]: <info> [1620617509.2564] device (tun0): state change: secondaries -> activated (reason 'none', sys-iface-state: 'external') May 09 20:31:49 fedora NetworkManager[844]: <info> [1620617509.2585] device (tun0): Activation: successful, device activated. 
traceroute youtube.com traceroute to youtube.com (10.10.34.35), 30 hops max, 60 byte packets 1 _gateway (172.16.148.161) 110.442 ms 112.015 ms 112.743 ms 2 92.223.89.1 (92.223.89.1) 114.433 ms 115.302 ms 116.738 ms 3 10.255.8.182 (10.255.8.182) 118.095 ms 119.022 ms 10.255.8.181 (10.255.8.181) 120.942 ms 4 10.255.8.177 (10.255.8.177) 122.864 ms 124.759 ms 125.409 ms 5 149.6.67.153 (149.6.67.153) 127.591 ms 128.780 ms 129.219 ms 6 * * * 7 * * * 8 * * * purevpn log: Tue May 11 09:32:39 2021 WARNING: file '/etc/purevpn/login.conf' is group or others accessible Tue May 11 09:32:39 2021 OpenVPN 2.4.4 x86_64-unknown-linux-gnu [SSL (OpenSSL)] [LZO] [LZ4] [EPOLL] [MH/PKTINFO] [AEAD] built on Dec 20 2017 Tue May 11 09:32:39 2021 library versions: OpenSSL 1.0.2g 1 Mar 2016, LZO 2.08 Tue May 11 09:32:39 2021 NOTE: the current --script-security setting may allow this configuration to call user-defined scripts Tue May 11 09:32:39 2021 TCP/UDP: Preserving recently used remote address: [AF_INET]92.223.89.8:53 Tue May 11 09:32:39 2021 UDP link local: (not bound) Tue May 11 09:32:39 2021 UDP link remote: [AF_INET]92.223.89.8:53 Tue May 11 09:32:39 2021 WARNING: this configuration may cache passwords in memory -- use the auth-nocache option to prevent this Tue May 11 09:32:40 2021 WARNING: 'link-mtu' is used inconsistently, local='link-mtu 1557', remote='link-mtu 1550' Tue May 11 09:32:40 2021 WARNING: 'cipher' is used inconsistently, local='cipher AES-256-CBC', remote='cipher AES-256-GCM' Tue May 11 09:32:40 2021 WARNING: 'auth' is used inconsistently, local='auth SHA1', remote='auth [null-digest]' Tue May 11 09:32:40 2021 WARNING: 'comp-lzo' is present in remote config but missing in local config, remote='comp-lzo' Tue May 11 09:32:40 2021 [Secure-Server] Peer Connection Initiated with [AF_INET]92.223.89.8:53 Tue May 11 09:32:41 2021 TUN/TAP device tun0 opened Tue May 11 09:32:41 2021 do_ifconfig, tt->did_ifconfig_ipv6_setup=0 Tue May 11 09:32:41 2021 /sbin/ifconfig tun0 
xxx.xx.xxx.xxx netmask 255.255.255.224 mtu 1500 broadcast 172.16.148.191 Tue May 11 09:32:41 2021 /etc/purevpn/pure-resolv-conf tun0 1500 1552 172.16.148.163 255.255.255.224 init dhcp-option DNS 92.223.89.10 dhcp-option DNS 92.223.89.12 Tue May 11 09:32:41 2021 Initialization Sequence Completed |
Can't find logs produced from system lockup after reboot, looked everywhere I know Posted: 11 May 2021 07:51 AM PDT I'm running Ubuntu Server 20.04, and my system recently locked up during a high-intensity workload. I couldn't see what was happening in real time over ssh because I got kicked out, but I had a monitor connected and saw some very useful messages blaming an application flying by. I can't find those logs anywhere though. I checked the journal, dmesg, and everything in /var/log . There is nothing in /var/crash . They looked like dmesg-style messages with a timestamp like [ 10.286001] leading each line. |
alias using `$1` and fallback default value prints both the param and the fallback value Posted: 11 May 2021 08:19 AM PDT I want to create an alias that can handle parameters ($1 ), and can fall back to a default value if a parameter is not provided. For example, $ alias foo='NUM=${1:-42}; echo $NUM' Invoked without params it works as I want: $ foo 42 But invoked with a param, it prints both my value and default value: $ foo 69 42 69 I don't understand why it's this way. How should it be done properly? How can I debug this kind of problem myself? |
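The surprise comes from how aliases work: they are pure text substitution with no positional parameters, so foo 69 expands to NUM=${1:-42}; echo $NUM 69, which is why both values appear and $1 is never set. Anything that needs arguments should be a shell function instead (a sketch):

```shell
# a function receives "$1"; an alias never does
foo() {
    num=${1:-42}    # fall back to 42 when no argument is given
    echo "$num"
}

foo       # prints 42
foo 69    # prints 69
```

To debug this kind of problem, type foo and see what the shell would run with it, e.g. bash's type foo prints the alias expansion verbatim.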
Why can't I load signed VirtualBox kernel modules in Debian with SecureBoot enabled? Posted: 11 May 2021 08:58 AM PDT With Debian testing and SecureBoot enabled: I need to sign the VirtualBox modules, as the output of the vboxconfig command says: vboxdrv.sh: Stopping VirtualBox services. vboxdrv.sh: Starting VirtualBox services. vboxdrv.sh: You must sign these kernel modules before using VirtualBox: vboxdrv vboxnetflt vboxnetadp See the documenatation for your Linux distribution.. vboxdrv.sh: Building VirtualBox kernel modules. vboxdrv.sh: failed: modprobe vboxdrv failed. Please use 'dmesg' to find out why. There were problems setting up VirtualBox. To re-start the set-up process, run /sbin/vboxconfig as root. If your system is using EFI Secure Boot you may need to sign the kernel modules (vboxdrv, vboxnetflt, vboxnetadp, vboxpci) before you can load them. Please see your Linux system's documentation for more information. Following the Debian Wiki about SecureBoot I did: # openssl req -new -x509 -newkey rsa:2048 -keyout MOK.priv -outform DER -out MOK.der -days 36500 -subj "/CN=My Name/" -nodes # mokutil --import MOK.der // prompts for one-time password # mokutil --list-new // recheck your key will be prompted on next boot <rebooting machine then enters MOK manager EFI utility: enroll MOK, continue, confirm, enter password, reboot> # dmesg | grep cert // verify your key is loaded and signed the modules: # /usr/src/linux-headers-5.7.0-1-amd64/scripts/sign-file sha256 /root/MOK.priv /root/MOK.der /lib/modules/5.7.0-1-amd64/misc/vboxdrv.ko # /usr/src/linux-headers-5.7.0-1-amd64/scripts/sign-file sha256 /root/MOK.priv /root/MOK.der /lib/modules/5.7.0-1-amd64/misc/vboxnetflt.ko # /usr/src/linux-headers-5.7.0-1-amd64/scripts/sign-file sha256 /root/MOK.priv /root/MOK.der /lib/modules/5.7.0-1-amd64/misc/vboxnetadp.ko Note: I didn't sign the module vboxpci because with sudo modinfo -n vboxpci it can't be found: modinfo: ERROR: Module vboxpci not found. 
After that, if I execute vboxconfig again (also as root), I get the same result, as it still can't load the modules: vboxdrv.sh: Stopping VirtualBox services. vboxdrv.sh: Starting VirtualBox services. vboxdrv.sh: You must sign these kernel modules before using VirtualBox: vboxdrv vboxnetflt vboxnetadp See the documenatation for your Linux distribution.. vboxdrv.sh: Building VirtualBox kernel modules. vboxdrv.sh: failed: modprobe vboxdrv failed. Please use 'dmesg' to find out why. There were problems setting up VirtualBox. To re-start the set-up process, run /sbin/vboxconfig as root. If your system is using EFI Secure Boot you may need to sign the kernel modules (vboxdrv, vboxnetflt, vboxnetadp, vboxpci) before you can load them. Please see your Linux system's documentation for more information. NOTE: If I try to load the module myself with sudo modprobe vboxdrv I get an error too: modprobe: ERROR: could not insert 'vboxdrv': Operation not permitted And dmesg says that the modules aren't signed: [ 35.668028] Lockdown: modprobe: unsigned module loading is restricted; see https://wiki.debian.org/SecureBoot [ 59.965757] Lockdown: modprobe: unsigned module loading is restricted; see https://wiki.debian.org/SecureBoot [ 247.249605] Lockdown: modprobe: unsigned module loading is restricted; see https://wiki.debian.org/SecureBoot How can I fix this without disabling SecureBoot? |
sudo apt --fix-broken install ERROR Posted: 11 May 2021 09:05 AM PDT I am trying to do sudo apt-get install kali-Linux-full I'm running into this error that's preventing me from going further. I've tried to clean the file and also remove it but I don't have permissions even though I'm Root. New to Linux so any help, tips, or tricks would be really appreciated. root@host:~$ sudo apt-get install kali-linux-full Reading package lists... Done Building dependency tree Reading state information... Done kali-linux-full is already the newest version (2020.1.0). You might want to run 'apt --fix-broken install' to correct these. The following packages have unmet dependencies: kali-linux-large : Depends: jsql-injection but it is not going to be installed E: Unmet dependencies. Try 'apt --fix-broken install' with no packages (or specify a solution). root@host:~$ sudo apt --fix-broken install Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following packages were automatically installed and are no longer required: freeglut3 gir1.2-notify-0.7 gir1.2-packagekitglib-1.0 gir1.2-polkit-1.0 gir1.2-secret-1 ibverbs-providers libhwloc5 libibverbs1 libpackagekit-glib2-18 Use 'sudo apt autoremove' to remove them. The following additional packages will be installed: jsql-injection The following NEW packages will be installed: jsql-injection 0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded. 1643 not fully installed or removed. Need to get 0 B/1,982 kB of archives. After this operation, 2,257 kB of additional disk space will be used. Do you want to continue? 
[Y/n] Y E: Invalid archive signature E: Internal error, could not locate member control.tar.{zstlz4gzxzbz2lzma} E: Prior errors apply to /var/cache/apt/archives/jsql-injection_0.81-0kali2_all.deb debconf: apt-extracttemplates failed: No such file or directory dpkg-deb: error: '/var/cache/apt/archives/jsql-injection_0.81-0kali2_all.deb' is not a Debian format archive dpkg: error processing archive /var/cache/apt/archives/jsql-injection_0.81-0kali2_all.deb (--unpack): dpkg-deb --control subprocess returned error exit status 2 Errors were encountered while processing: /var/cache/apt/archives/jsql-injection_0.81-0kali2_all.deb E: Sub-process /usr/bin/dpkg returned an error code (1) root@host:~$ |
Remove single line from journalctl file Posted: 11 May 2021 09:14 AM PDT I have an Ubuntu 18.04 server that is running a service I'm developing. The output is being sent to the system journal for logging. I accidentally failed to sanitize some logging and a plaintext password (for my own user) was accidentally leaked in the logs. I have fixed the service's logging behavior. Now I simply want to edit the journal files to remove the lines with the plaintext password. How do I edit a journalctl file? |
How to run multiple sed commands on a list of files using find output piped through xargs Posted: 11 May 2021 08:38 AM PDT This is what I tried: find . -name *.sv | xargs sed -i -e '<command>' -e '<command>' It does not work. Using the exact same command on each file still works. Thanks for the help. |
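The usual culprit is the unquoted *.sv: the shell expands it against the current directory before find ever runs, so find receives the wrong arguments (or errors out when several .sv files exist there). Quoting the pattern, and using -print0/-0 so filenames with spaces survive the pipe, makes the pipeline behave; the two sed expressions below are placeholders:

```shell
# quote the glob so find, not the shell, matches *.sv recursively
find . -name '*.sv' -print0 |
    xargs -0 sed -i -e 's/old/new/' -e 's/foo/bar/'
```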
Can't monitor Kickstart post-install log Posted: 11 May 2021 09:30 AM PDT I'm installing Scientific Linux 7 (I've got no reason to this isn't the case with all RHEL forks though) with a Kickstart script that contains the following: %post --interpreter /bin/bash --log /root/postinstall.log # do stuff %end After install, the log file is there for inspection as expected. But, using SL 6 I used to be able to change to TTY 2 and watch the log with tail -f /mnt/sysimage/root/postinstall.log . Now, it appears the log is created, but contents are not written until the post-install process is completed. Is there a way to monitor this progress? I've looked for the log file in /tmp/ , /var/log/ , /mnt/sysimage/tmp/ , and /mnt/sysimage/var/log/ without any luck. If the log file isn't available, is there a way to send output to another TTY from a Kickstart post-install script? Attempt 1: %post --interpreter /bin/bash ( # do stuff echo foo echo bar echo baz ) | tee /root/postinstall.log > /dev/tty1 %end This almost works, however, line endings seem to be a problem. It's only doing an LF, not a CR on the screen. The above outputs this on TTY1: foo bar baz Attempt 2: %post --interpreter /bin/bash --log /root/postinstall.log echo "Changing output to TTY 3; press Alt-F3 to view" > /dev/tty1 exec 1>/dev/tty3 2>&1 #do stuff %end This outputs the data correctly to the screen, but logs nothing. It also has the curious side-effect of delaying the reboot for like 10 minutes after the script completes. |
udev rules assign same port name for a modem with 4 ttyUSB ports Posted: 11 May 2021 08:06 AM PDT I have a dlink DW-157 3g dongle. I am trying to assign the same port to the dongle everytime it boots up by modifying the udev rules file. Since the dongle on boot boots up as a storage media, I have to enter the command below to eject and mount for modem mode and then other command below it to make use of the ttyUSB ports of the modem for running a dial up modem. sudo eject /dev/sr0 sudo /bin/sh -c "echo 2001 7d0e > /sys/bus/usb-serial/drivers/option1/new_id After entering these, sudo dmesg| grep ttyUSB appears as: [ 17.581264] usb 1-1.4: GSM modem (1-port) converter now attached to ttyUSB1 [ 17.584470] usb 1-1.4: GSM modem (1-port) converter now attached to ttyUSB2 [ 17.593854] usb 1-1.4: GSM modem (1-port) converter now attached to ttyUSB3 [ 17.594869] usb 1-1.4: GSM modem (1-port) converter now attached to ttyUSB4 The actual port on which I can use the modem for dial up is ttyUSB1. So, I'm trying to assign ttyUSB1 to d_uart in my udev rules file: ACTION=="add", ATTRS{idVendor}=="2001", ATTRS{idProduct}=="7d0e", SYMLINK+="d_uart" But what happens is d_uart gets assigned to ttyUSB4. What do I do to assign it to the first port always (ttyUSB1 in this case) ? Also, the output of the command for ttyUSB1,ttyUSB2,ttyUSB3 and ttyUSB4 for the comnand below: udevadm info -a -n /dev/ttyUSB4 | grep '{serial}' | head -n1 is the same. ATTRS{serial}=="3f980000.usb" Also, output of command ls -l /dev/d_uart lrwxrwxrwx 1 root root 7 Oct 3 13:27 /dev/d_uart -> ttyUSB4 lsusb output: Bus 001 Device 006: ID 2001:7d0e D-Link Corp. Bus 001 Device 004: ID 0403:6001 Future Technology Devices International, Ltd FT232 USB-Serial (UART) IC Bus 001 Device 003: ID 0424:ec00 Standard Microsystems Corp. SMSC9512/9514 Fast Ethernet Adapter Bus 001 Device 002: ID 0424:9514 Standard Microsystems Corp. 
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root h Output of udevadm info -n /dev/ttyUSB2: P: /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.4/1-1.4:1.3/ttyUSB2/tty/ttyUSB2 N: ttyUSB2 S: d_uart S: serial/by-id/usb-D-Link_Inc_D-Link_DWM-157-if03-port0 S: serial/by-path/platform-3f980000.usb-usb-0:1.4:1.3-port0 E: DEVLINKS=/dev/d_uart /dev/serial/by-id/usb-D-Link_Inc_D-Link_DWM-157-if03-port0 /dev/serial/by-path/platform-3f980000.usb-usb-0:1.4:1.3-port0 E: DEVNAME=/dev/ttyUSB2 E: DEVPATH=/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.4/1-1.4:1.3/ttyUSB2/tty/ttyUSB2 E: ID_BUS=usb E: ID_MODEL=D-Link_DWM-157 E: ID_MODEL_ENC=D-Link\x20DWM-157 E: ID_MODEL_ID=7d0e E: ID_PATH=platform-3f980000.usb-usb-0:1.4:1.3 E: ID_PATH_TAG=platform-3f980000_usb-usb-0_1_4_1_3 E: ID_REVISION=0300 E: ID_SERIAL=D-Link_Inc_D-Link_DWM-157 E: ID_TYPE=generic E: ID_USB_CLASS_FROM_DATABASE=Miscellaneous Device E: ID_USB_DRIVER=option E: ID_USB_INTERFACES=:020e00:0a0002:ff0201:ff0000:080650: E: ID_USB_INTERFACE_NUM=03 E: ID_USB_PROTOCOL_FROM_DATABASE=Interface Association E: ID_VENDOR=D-Link_Inc E: ID_VENDOR_ENC=D-Link\x2cInc\x20\x20 E: ID_VENDOR_FROM_DATABASE=D-Link Corp. E: ID_VENDOR_ID=2001 E: MAJOR=188 E: MINOR=2 E: SUBSYSTEM=tty E: TAGS=:systemd: E: USEC_INITIALIZED=978899 |
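Since all four ttyUSB nodes share the same vendor ID, product ID, and serial, a rule matching only those attributes ends up pointing the symlink at whichever node udev processes last. What distinguishes the ports is the USB interface number, visible as ID_USB_INTERFACE_NUM in the udevadm output above (03 for ttyUSB2). A sketch of a more specific rule; the "00" is an assumption, so first check which interface number ttyUSB1 reports with udevadm info -n /dev/ttyUSB1:

```
ACTION=="add", SUBSYSTEM=="tty", ATTRS{idVendor}=="2001", ATTRS{idProduct}=="7d0e", ENV{ID_USB_INTERFACE_NUM}=="00", SYMLINK+="d_uart"
```

Pinning the symlink to one interface number keeps d_uart stable across reboots regardless of which ttyUSBn name the kernel hands out.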
lxsession-logout vs lxde-logout Posted: 11 May 2021 10:06 AM PDT I'm new with lxde, I installed it on ubuntu 14.04 using sudo apt-get install lxde. But the problem I have is that I want to logout without a prompt. I've read some articles about this and they suggested one of these: lxsession-logout or lxde-logout . But they cannot be used to logout without a prompt. The question is what are the main differences between them? Is it possible to logout without a prompt on lxde? thanks. |
Limit POSIX find to specific depth? Posted: 11 May 2021 07:14 AM PDT I noticed recently that POSIX specifications for find do not include the -maxdepth primary. For those unfamiliar with it, the purpose of the -maxdepth primary is to restrict how many levels deep find will descend. -maxdepth 0 results in only command line arguments being processed; -maxdepth 1 would only handle results directly within the command line arguments, etc. How can I get the equivalent behavior to the non-POSIX -maxdepth primary using only POSIX-specified options and tools? (Note: Of course I can get the equivalent of -maxdepth 0 by just using -prune as the first operand, but that doesn't extend to other depths.) |
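Since POSIX.1-2008 find does include -path, depth can be limited by pruning every path with more than N components below the starting point. A sketch for a starting point of `.`:

```shell
# emulate -maxdepth with POSIX find: prune everything deeper than wanted
find . -path './*/*' -prune -o -print      # like -maxdepth 1
find . -path './*/*/*' -prune -o -print    # like -maxdepth 2
```

Because -prune evaluates true, the pruned paths fall through the -o and are never printed; insert further tests before -print as needed. For other starting points, prefix the pattern accordingly (e.g. find /tmp -path '/tmp/*/*' -prune -o -print).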
How to get the tty in which bash is running? Posted: 11 May 2021 08:37 AM PDT In the second method proposed by this page, one gets the tty in which bash is being run with the command: ps ax | grep $$ | awk '{ print $2 }' I thought to myself that surely this is a bit lazy, listing all running processes only to extract one of them. Would it not be more efficient (I am also asking whether this would introduce unwanted effects) to do: ps -p $$ | tail -n 1 | awk '{ print $2 }' FYI, I came across this issue because sometimes the first command would actually yield two (or more) lines. This would happen randomly, when there was another process running with a PID that contains $$ as a substring. In the second approach, I avoid such cases by requesting the PID that I know I want. |
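Both variants can be avoided entirely: ps can be asked for exactly one field of exactly one process, with no header line and no grep or awk at all, and there is also the dedicated tty(1) utility for the terminal attached to standard input:

```shell
# the controlling terminal of this shell, straight from ps
# (the '=' after -o suppresses the header line)
ps -o tty= -p "$$"

# the terminal connected to stdin
tty
```

Note the two can differ when stdin is redirected: tty then reports "not a tty", while the ps form still names the shell's controlling terminal.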
nix package manager: perl warning: Setting locale failed Posted: 11 May 2021 09:15 AM PDT Whenever I run a command for the nix package manager (e.g. nix-channel --update) I get the following warning: perl: warning: Setting locale failed. perl: warning: Please check that your locale settings: LANGUAGE = "", LC_ALL = "en_US.UTF-8", LC_CTYPE = "en_US.UTF-8", LANG = "en_US.UTF-8" are supported and installed on your system. perl: warning: Falling back to the standard locale ("C"). I suspect it's somehow related to nix since other perl scripts don't show this behaviour (I tried perl -e exit and something using WWW::Curl). Changing the locale settings does reflect in the output of the warning, but the warning is still shown with every configuration I could think of. OS is openSUSE. What can I do? |
git pull from remote but no such ref was fetched? Posted: 11 May 2021 09:55 AM PDT I have a git mirror on my disk and when I want to update my repo with git pull it gives me error message: Your configuration specifies to merge with the ref '3.5/master' from the remote, but no such ref was fetched. It also gives me: 1ce6dac..a5ab7de 3.4/bfq -> origin/3.4/bfq fa52ab1..f5d387e 3.4/master -> origin/3.4/master 398cc33..1c3000a 3.4/upstream-updates -> origin/3.4/upstream-updates d01630e..6b612f7 3.7/master -> origin/3.7/master 491e78a..f49f47f 3.7/misc -> origin/3.7/misc 5b7be63..356d8c6 3.7/upstream-updates -> origin/3.7/upstream-updates 636753a..027c1f3 3.8/master -> origin/3.8/master b8e524c..cfcf7b5 3.8/misc -> origin/3.8/misc * [neuer Zweig] 3.8/upstream-updates -> origin/3.8/upstream-updates When I run make menuconfig it gives me Linux version 3.5.7? What does this mean? How can I update my repo? |