Is there a command to wait X seconds before the next command? Posted: 30 Jun 2021 10:05 AM PDT Say I want to execute two commands, but I want to wait X seconds before the second one gets executed. What's the command for this? E.g. sudo dnf upgrade -y && [PAUSE X SECONDS] && shutdown -r
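A minimal sketch of the usual approach, assuming GNU coreutils sleep is available (it takes a number of seconds by default; suffixes such as 5m also work):

# upgrade, wait 30 seconds, then reboot; the chain stops if an earlier step fails
sudo dnf upgrade -y && sleep 30 && shutdown -r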
“You are in emergency mode. After logging in, type "journalctl -xb" to view…” Posted: 30 Jun 2021 10:07 AM PDT My OS (Pop!_OS, Ubuntu-based) froze because it ran out of RAM. I rebooted the system and now only have a CLI, where I get the message mentioned in the title: "You are in emergency mode. After logging in, type 'journalctl -xb' to view…". To work toward a solution I ran sudo blkid, as mentioned in previous threads, to see the UUIDs of the devices so I can edit certain lines with sudo vim /etc/fstab: https://www.enmimaquinafunciona.com/pregunta/135568/permitir-el-funcionamiento-en-modo-de-emergencia As I understand it, I must change "00" to "0 0" on the corresponding line for the device where I mounted my OS, in my case /dev/sdb3. This is what I got when running cat /etc/fstab:
/etc/fstab: static file system information. Use 'blkid' to print the universally unique identifier for a device; this may be used with UUID= as a more robust way to name devices that works even if disks are added or removed. See fstab(5)
/dev/mapper/cryptswap none swap defaults 0 0
PARTUUID=7d9d3545-8064-4caf-9c02-c8ad5d8f1f92 /boot/efi vfat unmask=0077 0 0
UUID=4a53bf15-f3ba-49e6-a692-679166eab69c / ext4 noatime,errors=rmeount-ro 0 0
UUID= 0C80015D80014EA0 /mnt/data ntfs defaults 0 2
*The highlighted line corresponds to my root system. As I can see, there is already a "0 0" instead of "00" on the /dev/sdb3 UUID line (where I have installed the root system). If it helps, I have two GPUs in this system: an NVIDIA GeForce MX330 and the integrated graphics of an Intel i7-1065G7. So, how can I boot in normal mode?
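Not an answer from that thread, just a hedged sketch of first diagnostic steps from the emergency shell; the NTFS data line with fsck pass number 2 is only a guess at the culprit, and the nofail option is an assumption about what you want (boot even if that disk is absent or dirty):

journalctl -xb | grep -iE 'mount|fsck|dependency'   # find which unit actually failed
# if the NTFS data disk is the one that cannot be mounted or checked, making it
# non-fatal and disabling fsck for it is a common fix, e.g. in /etc/fstab:
#   UUID=0C80015D80014EA0 /mnt/data ntfs defaults,nofail 0 0
mount -a    # should come back clean before you reboot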
Show Plot in Separate Window in PyCharm Professional Edition 2021.1.2 Posted: 30 Jun 2021 09:20 AM PDT I am a new Linux Ubuntu 20.04.2 LTS user. I want to view the plots created in PyCharm Professional 2021.1.2 (using the matplotlib.pyplot.plot command) in a separate window. By default, the plot is displayed in the built-in SciView within the IDE. I want it displayed in a separate window for some reason. I did try unchecking 'Show plots in tool window' in Settings >> Tools >> Python Scientific. That worked in the Windows 10 version of PyCharm, but in Ubuntu I am getting the following error:
Error: failed to send plot to http://127.0.0.1:63342
Traceback (most recent call last):
File "/home/jenkins/Downloads/pycharm-professional-2021.1.2/pycharm-2021.1.2/plugins/python/helpers/pycharm_display/datalore/display/display_.py", line 60, in _send_display_message urlopen(url, buffer)
File "/usr/lib/python3.8/urllib/request.py", line 222, in urlopen return opener.open(url, data, timeout)
File "/usr/lib/python3.8/urllib/request.py", line 531, in open response = meth(req, response)
File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response response = self.parent.error(
File "/usr/lib/python3.8/urllib/request.py", line 569, in error return self._call_chain(*args)
File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain result = func(*args)
File "/usr/lib/python3.8/urllib/request.py", line 649, in http_error_default raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 404: Not Found
NOTE: For the same code, I get the graph if I keep 'Show plots in tool window' checked in Settings >> Tools >> Python Scientific. I am using Python 3.8.5. Please help me fix this.
How to determine the minimum supported OS version of my app? Posted: 30 Jun 2021 10:02 AM PDT Background I'm making a C++ app that's supposed to run on CentOS 7. Builds are done on CentOS 7.9, and I'm using C++11 and C++17 features, which limits backward compatibility with older versions of CentOS 7. There is an installer that can perform some checks before the user is able to run the app. Question How can I automatically determine the minimal supported version of CentOS 7 for the app I'm developing? What I have currently Right now, after building, I'm retrieving a list of the required libraries using ldd and readelf (see: https://stackoverflow.com/questions/6242761/determine-direct-shared-object-dependencies-of-a-linux-binary ). With the list I'm able to compare what the user has to what my app needs, and if the user's version is lower than required, the installer tells them to upgrade. However, with this method the list reflects the versions of the libraries on my build machine, not the oldest compatible versions of them. What I've thought of doing One idea was to downgrade my build machine to some older version of CentOS 7 and, if the app compiles, call that the lowest supported one. The installer would then compare the user's library versions to the generated list. That said, I'd like some sort of promise that newer versions are backward-compatible with the version I'd be using, and I couldn't find anything like that on CentOS' or Red Hat's web pages. There are also security concerns about using an older, unsupported version of the OS. A second option would be to only support whatever I have on my build machine, but that might require some users to upgrade, and they might not like it. A third option is to build the app and try to run it on an old version of CentOS. The biggest part of my problem is a lack of knowledge of how to distribute Linux apps outside of the package manager. On Windows you simply pack all DLLs with the EXEs and you're good to go (most of the time). On Linux, however, you can't do that (I mean, you can mess with RPATH...).
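One hedged way to turn the built binary itself into a minimum-runtime statement is to list the versioned symbols it really references: the highest GLIBC_*/GLIBCXX_* versions map to a minimum glibc/libstdc++, which in turn maps to a CentOS 7 point release (that last mapping you still look up by hand). ./myapp is a placeholder for your binary:

# highest glibc and libstdc++ symbol versions the binary needs
objdump -T ./myapp | grep -o 'GLIBC_[0-9.]*' | sort -uV | tail -1
objdump -T ./myapp | grep -o 'GLIBCXX_[0-9.]*' | sort -uV | tail -1
# what a target host actually provides
ldd --version | head -1
strings /usr/lib64/libstdc++.so.6 | grep '^GLIBCXX_' | sort -V | tail -1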
Screen resolution won't change no matter which way I try it Posted: 30 Jun 2021 10:09 AM PDT Hello, I'm using Manjaro GNOME in a VM and I wanted to change the screen resolution to 1080p. To achieve this I first used xrandr. It's a bit weird, though, that the only output detected is "XWAYLAND0" and not "Virtual1". This is what I entered in the terminal:
~ $ cvt 1920 1080
# 1920x1080 59.96hz (CVT 2.07M9) hsync: 67.16 kHz; pclk: 173.00 MHz
Modeline "1920x1080_60.00" 173.00 1920 2048 2248 2567 1080 1083 1088 1120 -hsync +vsync
~ $ xrandr --newmode "1920x1080_60.00" 173.00 1920 2048 2248 2567 1080 1083 1088 1120 -hsync +vsync
~ $ xrandr --addmode XWAYLAND0 1920x1080_60.00
But the screen resolution still does not show up in the display settings. What I tried next was changing the GRUB screen resolution as shown in a YouTube tutorial. I changed GRUB_GFXMODE in my /etc/default/grub to 1920x1080x32 and entered the command
~ $ sudo update-grub
But this also had no effect on my screen resolution. How do I change my screen resolution?
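A hedged sketch of what usually explains this: when the only output xrandr sees is XWAYLAND0, the session is Wayland, and modes added through xrandr only affect the Xwayland proxy, not the real display. One option is to check the session type and, if it is Wayland, log in with "GNOME on Xorg" and add the mode to the real output (the output name Virtual-1 below is an assumption; check xrandr -q):

echo "$XDG_SESSION_TYPE"      # prints "wayland" or "x11"
# under Xorg, reuse the cvt modeline verbatim instead of retyping the numbers
xrandr --newmode "1920x1080_60.00" $(cvt 1920 1080 | sed -n 's/^Modeline *"[^"]*" *//p')
xrandr --addmode Virtual-1 1920x1080_60.00
xrandr --output Virtual-1 --mode 1920x1080_60.00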
How to start Ubuntu Desktop from the CLI? Posted: 30 Jun 2021 09:19 AM PDT I'm using a MacBook Pro M1 and trying to set up an Ubuntu VM using UTM. I've managed to set up Ubuntu 20.04.2 LTS Server, but am having issues getting the Desktop / GUI part running. I've followed this guide to the letter - https://mac.getutm.app/gallery/ubuntu-20-04 - and managed to get the server installed fine. At the end of the guide, I'm told to do the following to install Ubuntu Desktop:
$ sudo apt install tasksel
$ sudo tasksel install ubuntu-desktop
$ sudo reboot
That had no effect. I then found another guide here - https://linuxconfig.org/start-gui-from-command-line-on-ubuntu-20-04-focal-fossa - which says to run the following:
$ sudo systemctl isolate graphical
$ sudo systemctl set-default graphical.target
Again - zero effect. What am I missing?
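A hedged sketch of what usually gets a GNOME session up on a 20.04 server install; it assumes the desktop task really finished installing and that a running display manager is what is missing (in a UTM VM it is also worth confirming the VM has a display device attached rather than only a serial console):

sudo apt install ubuntu-desktop              # pulls in GNOME and the gdm3 display manager
sudo systemctl set-default graphical.target
systemctl status display-manager             # after the install, this should show gdm
sudo reboot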
How to Ctrl+Z a Bash for loop? Posted: 30 Jun 2021 08:41 AM PDT If I have a for loop that executes a long-running process with different arguments each time it runs, and I then press Ctrl+Z, this suspends the process that is running. But when I execute fg to continue the suspended process, the loop exits after the process has finished, instead of continuing with the next run. Example:
~$ ls
very_long_1.mp3 very_long_2.mp3 very_long_3.mp3 very_long_4.mp3 very_long_5.mp3 very_long_6.mp3
~$ for mp3 in *.mp3; do mplayer $mp3; done
[mplayer plays very_long_1.mp3]
[mplayer plays very_long_2.mp3]
[mplayer plays very_long_3.mp3]
^Z [while mplayer is still playing very_long_3.mp3]
~$ sleep 1m; fg
[mplayer continues playing very_long_3.mp3]
~$
After playing very_long_3.mp3, I expect the 3 yet-unplayed files to be played - but that never occurs. Why does this happen, and how can I suspend the whole for loop while being able to execute commands like I can when pressing Ctrl+Z?
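Not a full explanation, but a hedged workaround: Ctrl+Z only stops the foreground job (mplayer), while the loop belongs to the interactive shell itself, which cannot suspend itself and therefore abandons the rest of the loop. If the loop runs in a subshell, the whole job - loop plus mplayer - is what gets stopped and resumed, so the remaining iterations survive:

# run the loop as a single job; ^Z now suspends the subshell and mplayer together
( for mp3 in *.mp3; do mplayer "$mp3"; done )
# ^Z ... run other commands ... then resume everything:
fg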
CentOS: Connection limit per port for Redis Posted: 30 Jun 2021 08:28 AM PDT I am using the Redis caching server on CentOS, where Redis accepts client connections on the configured listening TCP port. I am trying to figure out the limits applied by the operating system to the number of connections allowed on the single port configured for Redis. The user being used is root, as shown:
[root@server-001]# ps -ef | grep -i redis
root 19595 1 9 Jun26 ? 09:43:07 /usr/local/bin/redis-server 0.0.0.0:6379
Now I am puzzled by multiple factors. 1st: the value of file-max is:
[root@server-001]# cat /proc/sys/fs/file-max
6518496
2nd: the value of limits.conf:
[root@server-001]# cat /etc/security/limits.d/20-nproc.conf
# Default limit for number of user's processes to prevent
# accidental fork bombs.
# See rhbz #432903 for reasoning.
* soft nproc 4096
root soft nproc unlimited
3rd: the soft and hard limits of file descriptors:
[root@server-001]# ulimit -Hn
4096
[root@server-001]# ulimit -Sn
1024
Now, knowing that the real factor limiting connections to a single port is file descriptors, which one do I have to change to make sure that the Redis server accepts as many clients as possible?
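A hedged sketch of how to see which limit actually applies: the nproc lines limit processes, not descriptors, so what matters is the open-files limit the running redis-server inherited at start time, plus Redis's own maxclients cap (10000 by default). Checking both:

pid=$(pidof redis-server)
grep 'open files' /proc/$pid/limits      # the limit the process really got when it started
redis-cli config get maxclients          # the cap Redis itself enforces
# raising it usually means a nofile entry in limits.conf (or LimitNOFILE= in a
# systemd unit) plus maxclients in redis.conf, then restarting redis-server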
How to prevent VcXsrv from copying to the clipboard on selection? Posted: 30 Jun 2021 08:00 AM PDT I am using VcXsrv to view a remote Linux machine's windows on Windows :) I am running it with "C:\Program Files\VcXsrv\vcxsrv.exe" -ac -terminate -lesspointer -multiwindow -clipboard -wgl -xkblayout us,ru -xkboptions grp:rctrl_rshift_toggle Unfortunately, it behaves in a way that is unusual for Windows users: it copies to the clipboard upon any selection. Can it be disabled / reconfigured to copy only on an explicit command?
export env variable does not work from Makefile Posted: 30 Jun 2021 08:49 AM PDT We have the task below in a Makefile:
test:
    export SOME_ENV=someTest
    go test -tags=integration -v -race ./tests/integrationtest/...
At a shell prompt this works: SOME_ENV is set and the next command (go test) internally picks up the .someTest.env file:
$ export SOME_ENV=someTest
$ go test -tags=integration -v -race ./tests/integrationtest/...
But the Makefile approach doesn't work. Why is the environment variable not set using the Makefile approach? Note: we have other tasks in the Makefile that should not be influenced by this export.
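A hedged sketch of why this usually happens: make runs every recipe line in its own /bin/sh, so an export made on one line is gone by the time the next line runs. Keeping the assignment and the command on the same shell line avoids that without affecting other targets:

# one shell line, so the assignment and go test share a process
SOME_ENV=someTest go test -tags=integration -v -race ./tests/integrationtest/...

GNU make's target-specific export form (test: export SOME_ENV=someTest above the recipe) is another way to scope the variable to just this target.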
Why is integer division faster than bitwise shift in the shell? Posted: 30 Jun 2021 08:13 AM PDT I'm comparing the performance of bash and dash (the default sh in Xubuntu 18.04).
- I expect sh to be faster than bash.
- I expect bitwise shift to be faster than the division operator.
However, I'm getting inconsistencies:
λ hyperfine --export-markdown a.md -w 3 ./*
Benchmark #1: ./calc-div.bash
Time (mean ± σ): 2.550 s ± 0.033 s [User: 2.482 s, System: 0.068 s]
Range (min … max): 2.497 s … 2.595 s 10 runs
Benchmark #2: ./calc-div.sh
Time (mean ± σ): 2.063 s ± 0.016 s [User: 2.063 s, System: 0.000 s]
Range (min … max): 2.043 s … 2.100 s 10 runs
Benchmark #3: ./calc-shift.bash
Time (mean ± σ): 3.312 s ± 0.034 s [User: 3.255 s, System: 0.057 s]
Range (min … max): 3.274 s … 3.385 s 10 runs
Benchmark #4: ./calc-shift.sh
Time (mean ± σ): 2.087 s ± 0.046 s [User: 2.086 s, System: 0.001 s]
Range (min … max): 2.058 s … 2.211 s 10 runs
Summary
'./calc-div.sh' ran
1.01 ± 0.02 times faster than './calc-shift.sh'
1.24 ± 0.02 times faster than './calc-div.bash'
1.61 ± 0.02 times faster than './calc-shift.bash'

Command | Mean [s] | Min [s] | Max [s] | Relative |
./calc-div.bash | 2.550 ± 0.033 | 2.497 | 2.595 | 1.24 ± 0.02 |
./calc-div.sh | 2.063 ± 0.016 | 2.043 | 2.100 | 1.00 |
./calc-shift.bash | 3.312 ± 0.034 | 3.274 | 3.385 | 1.61 ± 0.02 |
./calc-shift.sh | 2.087 ± 0.046 | 2.058 | 2.211 | 1.01 ± 0.02 |

Here are the scripts I tested:
calc-div.bash
#!/usr/bin/env bash
for i in {1..1000000}; do
_=$(( i / 1024 ))
done
calc-div.sh
i=1
while [ $i -le 1000000 ]; do
_=$(( i / 1024 ))
i=$(( i + 1 ))
done
calc-shift.bash
for i in {1..1000000}; do
_=$(( i >> 10 ))
done
calc-shift.sh
#!/usr/bin/env sh
i=1
while [ $i -le 1000000 ]; do
_=$(( i >> 10 ))
i=$(( i + 1 ))
done
This difference is more visible for 5000000:
Command | Mean [s] | Min [s] | Max [s] | Relative |
./calc-div.bash | 13.333 ± 0.202 | 12.870 | 13.584 | 1.23 ± 0.02 |
./calc-div.sh | 10.830 ± 0.119 | 10.750 | 11.150 | 1.00 |
./calc-shift.bash | 17.361 ± 0.357 | 16.995 | 18.283 | 1.60 ± 0.04 |
./calc-shift.sh | 11.226 ± 0.351 | 10.834 | 11.958 | 1.04 ± 0.03 |
Summary
'./calc-div.sh' ran
1.04 ± 0.03 times faster than './calc-shift.sh'
1.23 ± 0.02 times faster than './calc-div.bash'
1.60 ± 0.04 times faster than './calc-shift.bash'
As you can see, for both bash and dash, the division operator is faster than the equivalent bitwise shift to the right.
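A control worth running before drawing conclusions (a sketch, not part of the original benchmarks): the same loop with no division or shift in the body at all. If it takes nearly as long as calc-div.sh, then loop bookkeeping (the [ ] test and the counter increment) dominates the runtime, and the measured gap between / and >> is mostly noise rather than a real operator-cost difference:

#!/usr/bin/env sh
# calc-noop.sh - same loop shape, empty body
i=1
while [ $i -le 1000000 ]; do
:
i=$(( i + 1 ))
done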
Remarkable tablet: Convert text file into notebook page Posted: 30 Jun 2021 09:52 AM PDT I am using the Remarkable 2 tablet with its note-taking functionality. I can scribble new notebook pages or annotate PDF documents with the tools on it. I would, however, like to import existing text files (mainly txt) that I can then treat as if I had scribbled them (cut parts, move them around, erase single words, etc.). Is there a conversion tool for Remarkable files that can convert text files into the format Remarkable uses for its notebook pages? Or could I batch-modify my text files so that Remarkable reads one as a notebook page?
How to ignore part of a filename Posted: 30 Jun 2021 08:00 AM PDT Sorry if this question has been asked before. I am new to all of this. I would like to concatenate all files from different folders that contain R1 at a specific position in their filenames. My attempts so far are not working as some file names have a different S number.
Folder 1
952_56890_S91_combined_L001_R1_001.fastq.gz
952_56890_S91_combined_L001_R2_001.fastq.gz
952_53929_S92_combined_L001_R1_001.fastq.gz
952_53929_S92_combined_L001_R2_001.fastq.gz
Folder 2
952_56890_S125_combined_L001_R1_001.fastq.gz
952_56890_S125_combined_L001_R2_001.fastq.gz
952_53929_S126_combined_L001_R1_001.fastq.gz
952_53929_S126_combined_L001_R2_001.fastq.gz
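A hedged sketch, assuming "the specific position" means the _R1_/_R2_ read tag and that the varying _S##_ part is what should be ignored; for .fastq.gz this is safe because concatenated gzip streams are still a valid gzip file. The folder names and output name are placeholders:

# the * wildcard skips over the changing S number; only R1 files match
cat Folder1/*_combined_L001_R1_001.fastq.gz Folder2/*_combined_L001_R1_001.fastq.gz > all_R1.fastq.gz

# or, for many or nested folders, in a stable order:
find . -type f -name '*_L001_R1_001.fastq.gz' -print0 | sort -z | xargs -0 cat > all_R1.fastq.gz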
Can't find software installed with snap on Fedora Posted: 30 Jun 2021 08:05 AM PDT I tried to install PyCharm on Fedora:
[ac@fedora ~]$ sudo snap install pycharm-community --classic
2021-06-30T14:26:23+02:00 INFO Waiting for automatic snapd restart...
pycharm-community 2021.1.2 from jetbrains✓ installed
But I wasn't able to launch it:
[ac@fedora ~]$ pycharm
bash: pycharm: command not found...
[ac@fedora ~]$ pycharm-community
bash: pycharm-community: command not found...
I wasn't able to find it in the menu with the Windows key either... So how do you find installed software on Fedora? Should I have done it a different way? Doesn't snap run work for every piece of software? I tried to do the same with MySQL Workbench:
[ac@fedora Downloads]$ sudo dnf install mysql-workbench-community-8.0.25-1.fc34.src-1.rpm
[sudo] password for ac:
Last metadata expiration check: 1:28:28 ago on Wed 30 Jun 2021 03:23:29 PM CEST.
Package mysql-workbench-community-8.0.25-1.fc34.src is already installed.
Dependencies resolved.
Nothing to do.
Complete!
[ac@fedora Downloads]$ snap run mysql-workbench-community
error: cannot find current revision for snap mysql-workbench-community: readlink /var/lib/snapd/snap/mysql-workbench-community/current: no such file or directory
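A hedged sketch of where to look. On Fedora, snap launchers normally land in /var/lib/snapd/snap/bin, which only gets onto PATH after a re-login (or reboot) once snapd's profile script is in place, so the PyCharm snap itself is probably fine. The second case is different: dnf installed an RPM (a .src.rpm, even), not a snap, so snap run has nothing to find there.

snap list                                    # what snapd actually installed
ls /var/lib/snapd/snap/bin/                  # launcher symlinks for installed snaps
/var/lib/snapd/snap/bin/pycharm-community    # run by full path, or add this dir to PATH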
How can I print the values from the dcn file after a specific pattern is found in Linux? Posted: 30 Jun 2021 09:58 AM PDT I have input like this:
[Data.11]
Store,100,,,,,,,,,,,,,,,,,,,5222
Store,101,,,,,,,,,,,,,,,,,,,5235
[Data.12]
TaxSchedulePt,5899,2,110.0100,99999999.99,,8.8750
TaxSchedulePt,5900,1,0,110.00,,0.0000
[Data.13]
TaxSchedulePt,5900,1,0,110.00,,0.0000
TaxSchedulePt,5900,2,110.0100,99999999.99,,8.8750
First I need to find whether [Data.*] occurs in the given input file. If [Data.*] is found, I need to write that specific [Data.*] section's values into a separate file.
Expected output file for [Data.11]:
Store,100,,,,,,,,,,,,,,,,,,,5222
Store,101,,,,,,,,,,,,,,,,,,,5235
Expected output file for [Data.12]:
TaxSchedulePt,5899,2,110.0100,99999999.99,,8.8750
TaxSchedulePt,5900,1,0,110.00,,0.0000
Expected output file for [Data.13]:
TaxSchedulePt,5900,1,0,110.00,,0.0000
TaxSchedulePt,5900,2,110.0100,99999999.99,,8.8750
And this is what I have tried:
filename=$1
Var1=Data.18
if grep -wq "$Var1" $filename ; then
awk '$1 ~ /Data[.]18/' > /ttk/new/data.dcn
else
echo "not Worked"
fi
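A hedged sketch of a single-pass awk approach: each [Data.N] header switches the current output file, and every following non-empty line is written there until the next header appears. The output naming (Data.11.dcn and so on) is my assumption; adjust as needed:

awk '
/^\[Data\./ {
if (out != "") close(out)
sec = $0
gsub(/[][]/, "", sec)      # "[Data.11]" -> "Data.11"
out = sec ".dcn"
next
}
out != "" && NF { print > out }
' "$filename"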
How to play a playlist continuously? Posted: 30 Jun 2021 09:21 AM PDT I have a video file which is 20 seconds long. I cut this video file into segments like
video_file_0 -> starts at 0:00, ends at 0:02
video_file_1 -> starts at 0:02, ends at 0:04
video_file_2 -> starts at 0:04, ends at 0:06
video_file_3 -> starts at 0:06, ends at 0:08
video_file_4 -> starts at 0:08, ends at 0:10
video_file_5 -> starts at 0:10, ends at 0:12
video_file_6 -> starts at 0:12, ends at 0:14
video_file_7 -> starts at 0:14, ends at 0:16
video_file_8 -> starts at 0:16, ends at 0:18
video_file_9 -> starts at 0:18, ends at 0:20
So my question is: how can I play these video files continuously in a single window, exactly like playing the whole video file from 0:00 to 0:20, without closing and reopening windows on every switch between video files? Can I use ffplay, ffmpeg or vlcj for this functionality? I tried
find -type f -name "video_file_*" | while read f; do ffplay -autoexit -- "$f"; done
But this code closes and reopens the window between every video file, and I don't want that. How can I do that? EDIT: I am building a Java project in which the streams are shown inside a JFrame, so I want this functionality to work inside the JFrame.
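A hedged sketch using ffmpeg's concat demuxer, assuming all segments share the same codec parameters (they should, since they were cut from one file). Joining without re-encoding gives one continuous stream that plays in a single window, and the joined file is also what you would hand to an embedded player; list.txt and joined.mp4 are placeholder names:

# build a playlist in concat-demuxer syntax
for f in video_file_*; do printf "file '%s'\n" "$f"; done > list.txt
# join the segments without re-encoding, then play the result in one window
ffmpeg -f concat -safe 0 -i list.txt -c copy joined.mp4
ffplay -autoexit joined.mp4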
Why does the system go to suspend and not power off? Posted: 30 Jun 2021 10:13 AM PDT When I press the power button on a machine, I get this log:
PM: suspend entry (s2idle)
PM: Syncing filesystems ... done.
Freezing user space processes ... (elapsed 0.003 seconds) done.
Is it suspending rather than powering off? Which setting can change that?
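Yes, that log shows a suspend entry (s2idle). On a systemd-based distribution the power-button action is usually decided by systemd-logind, so a hedged place to look is logind.conf, keeping in mind that a desktop environment's own power settings can override it:

grep HandlePowerKey /etc/systemd/logind.conf      # likely commented out or set to suspend
# a minimal change, assuming nothing else overrides logind:
sudo sed -i 's/^#\?HandlePowerKey=.*/HandlePowerKey=poweroff/' /etc/systemd/logind.conf
sudo systemctl restart systemd-logind             # or reboot, so the new setting is read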
How to trigger an ansible task only if the concerned directories are old enough (+30 days for example) Posted: 30 Jun 2021 08:52 AM PDT How can I trigger an ansible task only if the concerned directories are old enough (+30 days, for example)? I want to do something like
- name: backup biggest files
  #get difference between currentdate & last backup register age
I know I can get either a string or a defined value in a when clause, but I don't know how to do this. Here my goal is: if /mnt/backup.YYYYMMDD is older than 30 days, for example, run a list of tasks to create a new dated directory and do the backup itself (the synchronize method might be good?). How can I get this? Steps to get there:
- name: Check the last backup date
  shell: | #or find module
  register: lastone
- name: Get current date for arithmetics
  shell: |
    echo $(date +%s)
  register: currentdate
- name: find ideal path to create new backup if last one is too old
  # define & create new directory if currentdate - lastone is over a numeric value (sufficient difference)
  when: " {{ currentdate | int - lastone | int }} " > 40000
Here I ended up with a rough, quick-and-dirty start of a solution for testing:
---
- hosts: localhost
  become: true
  become_method: sudo
  become_user: francois
  tasks:
    - name: Check the last backup date
      shell: |
        date +%s -r $(find /mnt{1,2,3}/ -type d -name "backup.*[0-9]" 2> /dev/null | sort | tail -1)
      args:
        executable: /bin/bash
      register: lastone
    - name: Get current date for arithmetics
      shell: |
        date +%s
      register: currentdate
    - set_fact:
        difference: " {{ currentdate.stdout | int - lastone.stdout | int }} "
    - name: find ideal path to create new backup if last one is too old
      shell: |
        find /mnt{1,2,3}/ -type d -name "backup.*[0-9]" 2> /dev/null | sort -n | tail -1 | sed "s/\.[0-9].*/\.$(date +%Y%m%d)/"
      args:
        executable: /bin/bash
      register: rep
      when:
        - difference | int > 4000
    - name: create path
      file:
        path: "{{ rep.stdout }}"
        state: directory
        mode: "0755"
      when:
        - rep is defined
        - difference | int > 4000
That works, creating the backup.20210630 directory wherever it is mounted behind /mnt1, /mnt2 or /mnt3 (here /mnt3):
👨francois@💻zaphod🐙:~/GITLAB/dev/dev_ansible_serviceatonce$ ANSIBLE_NOCOWS=1 ansible-playbook -i inventory/hosts roles/filebackup/filebackup.yaml
PLAY [localhost] *********************
TASK [Gathering Facts] *********************
ok: [localhost]
TASK [Check the last backup date] *********************
changed: [localhost]
TASK [Get current date for arithmetics] *********************
changed: [localhost]
TASK [set_fact] *********************
ok: [localhost]
TASK [find ideal path to create new backup if last one is too old] *********************
changed: [localhost]
TASK [create path] *********************
changed: [localhost]
PLAY RECAP *********************
localhost : ok=6 changed=4 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
👨francois@💻zaphod🐙:~/GITLAB/dev/dev_ansible_serviceatonce$ ls -d /mnt3/backup.202106*
/mnt3/backup.20210604 /mnt3/backup.20210610 /mnt3/backup.20210615 /mnt3/backup.20210621 /mnt3/backup.20210629 /mnt3/backup.20210630
👨francois@💻zaphod🐙:~/GITLAB/dev/dev_ansible_serviceatonce$
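For comparison, a hedged shell-level sketch of the same decision using find's own age test instead of epoch arithmetic; the 30-day threshold and mount points come from the question, and it assumes the newest backup directory's mtime reflects when that backup was made:

# if no backup.* directory was modified within the last 30 days, a new backup is due
if ! find /mnt1 /mnt2 /mnt3 -maxdepth 1 -type d -name 'backup.*' -mtime -30 2>/dev/null | grep -q .; then
echo "newest backup is older than 30 days -> create backup.$(date +%Y%m%d)"
fi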
Trying to start a .sh on boot Posted: 30 Jun 2021 10:21 AM PDT I am trying to start a "start.sh" file when the server starts. Some info:
- it's a headless vServer
- I tried crontab but that didn't work either
- my current approach is the systemctl command
- when I log in, the user is root
- it's an Ubuntu server
Content of my systemd unit file:
[Unit]
Description=MCServer Start
After=multi-user.target
[Service]
Type=simple
ExecStart=/home/mcs/start.sh
[Install]
WantedBy=multi-user.target
Content of the start.sh:
screen -S minecraft java -Xms1024M -Xmx1024M -jar /home/mcs/server.jar
When I check the status after a reboot, the system replies with this:
● mcs.service - MCServer Start
Loaded: loaded (/etc/systemd/system/mcs.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Wed 2021-06-30 12:12:16 CEST; 32min ago
Process: 700 ExecStart=/home/mcs/start.sh (code=exited, status=203/EXEC)
Main PID: 700 (code=exited, status=203/EXEC)
Jun 30 12:12:16 v25431 systemd[1]: Started MCServer Start.
Jun 30 12:12:16 v25431 systemd[700]: mcs.service: Failed to execute command: Permission denied
Jun 30 12:12:16 v25431 systemd[700]: mcs.service: Failed at step EXEC spawning /home/mcs/start.sh: Permission denied
Jun 30 12:12:16 v25431 systemd[1]: mcs.service: Main process exited, code=exited, status=203/EXEC
Jun 30 12:12:16 v25431 systemd[1]: mcs.service: Failed with result 'exit-code'.
root@v25431:~#
I see that I'm missing a permission ... how do I fix that? A little heads-up: I'm pretty new to Linux, so explain it to me like I'm 5 or 80 years old ^^
Edit: When I change the unit to:
[Unit]
Description=MCServer Start
After=multi-user.target
[Service]
Type=simple
ExecStart=sh /home/mcs/start.sh
[Install]
WantedBy=multi-user.target
the output changes to
● mcs.service - MCServer Start
Loaded: loaded (/etc/systemd/system/mcs.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Wed 2021-06-30 12:59:08 CEST; 22s ago
Process: 1324 ExecStart=/usr/bin/sh /home/mcs/start.sh (code=exited, status=1/FAILURE)
Main PID: 1324 (code=exited, status=1/FAILURE)
Jun 30 12:59:08 v25431 systemd[1]: Started MCServer Start.
Jun 30 12:59:08 v25431 sh[1325]: Must be connected to a terminal.
Jun 30 12:59:08 v25431 systemd[1]: mcs.service: Main process exited, code=exited, status=1/FAILURE
Jun 30 12:59:08 v25431 systemd[1]: mcs.service: Failed with result 'exit-code'.
Requested output:
root@v25431:~# ls -l /home/mcs/start.sh
-rwxr-xr-x 1 root root 72 Jun 29 23:39 /home/mcs/start.sh
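Not a definitive fix, just a hedged reading of the two errors: 203/EXEC usually means systemd could not execute the script directly (missing shebang line, missing execute bit, or a noexec/inaccessible location), and "Must be connected to a terminal" means screen was started in attached mode; screen -dm starts it detached, which needs no terminal. A sketch of a start.sh along those lines, with Type=forking in the unit since screen -dm forks into the background:

#!/bin/sh
# /home/mcs/start.sh - run the server in a detached screen session named "minecraft"
exec /usr/bin/screen -dmS minecraft java -Xms1024M -Xmx1024M -jar /home/mcs/server.jar

And make sure the script stays executable: chmod +x /home/mcs/start.sh.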
PowerShell update is causing some bash scripts to fail which work fine directly from PowerShell Posted: 30 Jun 2021 10:09 AM PDT The call
instaloader --post-filter="likes > 16720 and not is_video and date_utc >= datetime(2021, 5, 29)" --no-profile-pic --no-videos --count=12 cristiano
works fine. But if I put the same code in test.sh and then run it from PowerShell like this
sh .\test.sh
I get an error like this:
usage: instaloader [--comments] [--geotags] [--stories] [--highlights] [--tagged] [--igtv] [--login YOUR-USERNAME] [--fast-update] profile | "#hashtag" | %location_id | :stories | :feed | :saved
instaloader --help
instaloader: error: unrecognized arguments: cristiano
In case you want to test it out firsthand, just install instaloader with pip install instaloader==4.7.1
PS: The same script used to work fine from my .sh files as well a few days back; maybe some update to my PowerShell is causing the issue. The issue, I guess, is in its ability to escape spaces within quotes. I am on bash 4.4. I don't remember what it was before reinstalling PowerShell. And no, WSL is not being used.
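A hedged first check rather than a diagnosis: when a .sh file passes through Windows tooling, CRLF line endings (or an editor re-saving the file with them) are a common reason an argument list suddenly splits differently, and it is cheap to rule out:

file test.sh              # reports "with CRLF line terminators" if affected
sed -i 's/\r$//' test.sh  # strip carriage returns (dos2unix does the same job)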
Is there a way to give a user access to a file without adding it to sudoers? Posted: 30 Jun 2021 08:51 AM PDT My monitoring system runs with lowered permissions, but I want it to run a command that needs access to a particular file in a folder that the monitoring system is not allowed to enter. The usual approach would be for me to add that command–user combination to my sudoers file and change the configuration so that the command is executed with sudo. However, I feel that this would in fact give the command many more rights than are needed (I don't need any write permissions at all). Is there a sudo-like program that can execute a command as the same user, with the only difference being that a particular (pre-defined) file or folder is then accessible? (The file/folder could even have a different name.) Could there be a way to accomplish this with mount namespaces?
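Not the namespace route, but a hedged sketch of a simpler alternative: POSIX ACLs can grant exactly one extra user read access to one file (plus traverse rights on the directory) without touching sudo at all. The user name and paths are placeholders, and the filesystem must be mounted with ACL support:

sudo setfacl -m u:monitoring:x /path/to/folder        # allow traversal into the folder
sudo setfacl -m u:monitoring:r /path/to/folder/file   # read-only access to the one file
getfacl /path/to/folder/file                          # verify what was granted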
Requirements to boot a rootfs.img Posted: 30 Jun 2021 09:02 AM PDT I have a Linux image from which I've extracted a "rootfs.img" file. The file seems to contain files and information, and I would like to "boot into it". File contents:
bin dev etc home lib media mnt proc run sbin service sys tmp usr var lib64
How can I install a bootloader to boot it? I've tried getting a live Ubuntu image, adding a partition, placing the contents of rootfs.img inside the partition and running "Boot Repair", which did find the other system as another bootable Linux, but I can't boot into it. It seems like I'm missing something.
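Before fighting the bootloader, one hedged way to check whether the extracted tree can run at all is to boot it as a container with systemd-nspawn; -b runs whatever init the image ships. This sidesteps the kernel, initramfs and bootloader questions, which still have to be answered separately for a bare-metal boot, and it assumes the image's init is container-friendly:

sudo mkdir -p /mnt/rootfs
sudo mount -o loop rootfs.img /mnt/rootfs     # if rootfs.img is a filesystem image
sudo systemd-nspawn -D /mnt/rootfs -b         # boot the image's own init
sudo systemd-nspawn -D /mnt/rootfs /bin/sh    # or just get a shell inside it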
Exact pattern match in awk Posted: 30 Jun 2021 10:17 AM PDT I have a listA which looks like this:
gene1
gene2
gene11
gene22
gene23
I also have a few tab-delimited text files (*hist.txt) whose 4th column matches the genes in the list. I want to extract the value of every gene in listA from the tab-delimited text files. This is what I have written:
for i in `cat listA.txt`
do
for a in *hist.txt
do
fn=${a%%_*}
cat $a | awk -v OFS="\t" -v fn="$fn" -v pattern="$i" '$4 ~ pattern{print fn,$0}' >> ${i}_out.txt
done
done
My pattern match fails because awk is not doing an exact match: the output for gene1 also includes gene11.
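A hedged sketch of the usual fix: ~ performs a regular-expression (substring) match, so gene1 also matches inside gene11; testing the field for string equality with == (or anchoring the regex as "^" pattern "$") keeps only exact hits, and awk can read the file itself without cat:

# inside the inner loop: equality test instead of a regex match
awk -v OFS="\t" -v fn="$fn" -v pattern="$i" '$4 == pattern {print fn, $0}' "$a" >> "${i}_out.txt"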
Unplugging external monitor forces me to reboot Posted: 30 Jun 2021 09:48 AM PDT I use a script that runs xrandr to turn off the laptop display, showing only the external monitor's screen. I have another script that basically "reverses" this (showing only the laptop display), which I use every time I have to take my laptop to a coffee shop. Sometimes, in a hurry, I don't do this and just unplug my laptop directly before putting it in the bag. When I reach the coffee shop and try to use it, I only see a blank screen, which I assume is due to my video configuration still being in "external screen only" mode. Question: how do I go to "laptop screen only" mode in this situation? Otherwise, is there a way I can avoid having this problem in the first place? Without knowing this, I simply force-reboot my laptop, which sucks. For the record, I run NixOS on a ThinkPad P71.
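A hedged recovery sketch instead of a reboot: switch to a virtual console (e.g. Ctrl+Alt+F2), point xrandr at the running X session, and turn the panel back on. The display number and output names below are assumptions; xrandr -q run the same way will show the real ones. For prevention, a tool like autorandr, which saves and reapplies xrandr layouts when outputs appear or disappear, is one option:

export DISPLAY=:0                      # the X session, assumed to be on display :0
export XAUTHORITY=$HOME/.Xauthority    # run as the user who owns the session
xrandr --output eDP-1 --auto           # typical laptop panel name
xrandr --output HDMI-1 --off           # the now-absent external output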
How to install Nvidia drivers in Manjaro 18? Posted: 30 Jun 2021 10:03 AM PDT Today I installed the latest version of Manjaro i3, 18.0.3, on my MSI GE-60PC laptop, which has a GeForce GTX 850M. I'm using my laptop with an external monitor (connected via HDMI). After the first boot, my laptop screen and external monitor were mirrored, and I wasn't able to change my display settings. Then I installed the nvidia package [1] using pacman with the following command.
sudo pacman -S linux419-nvidia
My kernel version is:
Linux my-msi 4.19.28-1-MANJARO #1 SMP PREEMPT Sun Mar 10 08:32:42 UTC 2019 x86_64 GNU/Linux
In [1], it says:
5. Reboot. The nvidia package contains a file which blacklists the nouveau module, so rebooting is necessary.
So, I rebooted the PC after installing the nvidia drivers. After the reboot my display settings were fixed, so my monitor was extended. However, when I run the following command:
lspci -k | grep -A 2 -E "(VGA|3D)"
0:02.0 VGA compatible controller: Intel Corporation 4th Gen Core Processor Integrated Graphics Controller (rev 06)
DeviceName: Onboard IGD
Subsystem: Micro-Star International Co., Ltd. [MSI] 4th Gen Core Processor Integrated Graphics Controller
--
01:00.0 3D controller: NVIDIA Corporation GM107M [GeForce GTX 850M] (rev a2)
Subsystem: Micro-Star International Co., Ltd. [MSI] GM107M [GeForce GTX 850M]
Kernel driver in use: nouveau
it says I'm still using the nouveau driver. What is the proper and correct way to install the Nvidia drivers? I want to install CUDA and PyTorch after that. When I run the nvidia-modprobe command, the following kernel log entries appear.
[ 1883.794671] nvidia-nvlink: Nvlink Core is being initialized, major device number 237
[ 1883.795017] NVRM: The NVIDIA probe routine was not called for 1 device(s).
[ 1883.795018] NVRM: This can occur when a driver such as: NVRM: nouveau, rivafb, nvidiafb or rivatv NVRM: was loaded and obtained ownership of the NVIDIA device(s).
[ 1883.795018] NVRM: Try unloading the conflicting kernel module (and/or NVRM: reconfigure your kernel without the conflicting NVRM: driver(s)), then try loading the NVIDIA kernel module NVRM: again.
[ 1883.795018] NVRM: No NVIDIA graphics adapter probed!
[ 1883.795132] nvidia-nvlink: Unregistered the Nvlink Core, major device number 237
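A hedged alternative to installing driver packages by hand: Manjaro ships its own hardware-detection tool, mhwd, which picks the matching driver bundle for hybrid Intel+NVIDIA laptops (on Manjaro 18 that is typically a bumblebee- or PRIME-based config). A sketch of its use:

mhwd -l                              # list the driver configs that match the detected GPUs
sudo mhwd -a pci nonfree 0300        # auto-install the recommended nonfree (NVIDIA) config
reboot
lspci -k | grep -A 2 -E "(VGA|3D)"   # afterwards, "Kernel driver in use" should change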
Security of bash script involving gpg symmetric encryption Posted: 30 Jun 2021 10:24 AM PDT Notice: the very same vulnerability has been discussed in this question, but the different setting of the problem (in my case I don't need to store the passphrase) allows for a different solution (i.e. using file descriptors instead of saving the passphrase in a file, see ilkkachu's answer). Suppose I have a symmetrically encrypted file my_file (with gpg 1.x), in which I store some confidential data, and I want to edit it using the following script:
read -e -s -p "Enter passphrase: " my_passphrase
gpg --passphrase $my_passphrase --decrypt $my_file | stream_editing_command | gpg --yes --output $my_file --passphrase $my_passphrase --symmetric
unset my_passphrase
Where stream_editing_command substitutes/appends something to the stream. My question: is this safe? Will the variable $my_passphrase and/or the decrypted output be visible/accessible in some way? If it isn't safe, how should I modify the script?
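A hedged sketch of the file-descriptor variant alluded to in the notice; only the passphrase handling changes, the rest of the pipeline is kept as in the question. The point is that --passphrase puts the secret on the command line, where other users can see it via ps, while --passphrase-fd reads it from a pipe created by process substitution:

read -r -s -p "Enter passphrase: " my_passphrase
gpg --batch --passphrase-fd 3 --decrypt "$my_file" 3< <(printf '%s\n' "$my_passphrase") |
stream_editing_command |
gpg --batch --yes --output "$my_file" --passphrase-fd 3 --symmetric 3< <(printf '%s\n' "$my_passphrase")
unset my_passphrase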
How do I get the fingerprint of an ASCII-armored PGP secret key with gpg? Posted: 30 Jun 2021 08:46 AM PDT I have a file secret.asc containing an ASCII-armored (i.e., plain text and starts with -----BEGIN PGP PRIVATE KEY BLOCK----- ) PGP/GPG secret/private key, and I would like to know its 40-character key fingerprint without importing it into my GPG keyring. Unfortunately, not a single command I've tried has surrendered that information to me. What I've Tried The following failed attempts were run on Ubuntu Xenial 16.04.5 with gpg version 1.4.20 and gpg2 version 2.1.11. The key in question was created solely for experimentation purposes and won't be used in anything, so I don't care if the output reveals too much about it.
$ gpg --with-fingerprint secret.asc
sec 2048R/161722B3 2018-09-12
uid Testing <testing@testing.nil>
Short key ID only, no fingerprint.
$ gpg2 --with-fingerprint secret.asc
gpg: DBG: FIXME: merging secret key blocks is not anymore available
gpg: DBG: FIXME: No way to print secret key packets here
Error.
$ gpg --with-fingerprint --no-default-keyring --secret-keyring ./secret.asc --list-secret-keys
gpg: [don't know]: invalid packet (ctb=2d)
gpg: keydb_search_first failed: invalid packet
Error.
$ gpg2 --with-fingerprint --no-default-keyring --secret-keyring ./secret.asc --list-secret-keys
/home/jwodder/.gnupg/pubring.gpg
--------------------------------
...
This lists the secret keys in my keyring for some reason.
$ gpg --dry-run --import -vvvv secret.asc
gpg: using character set `utf-8'
gpg: armor: BEGIN PGP PRIVATE KEY BLOCK
gpg: armor header: Version: GnuPG v1
:secret key packet: version 4, algo 1, created 1536783228, expires 0
skey[0]: [2048 bits]
skey[1]: [17 bits]
skey[2]: [2047 bits]
skey[3]: [1024 bits]
skey[4]: [1024 bits]
skey[5]: [1021 bits]
checksum: 386f
keyid: 07C0845B161722B3
:signature packet: algo 1, keyid 07C0845B161722B3
version 4, created 1536783228, md5len 0, sigclass 0x1f
digest algo 2, begin of digest b6 12
hashed subpkt 2 len 4 (sig created 2018-09-12)
hashed subpkt 12 len 22 (revocation key: c=80 a=1 f=9F3C2033494B382BEF691BB403BB6744793721A3)
hashed subpkt 7 len 1 (not revocable)
subpkt 16 len 8 (issuer key ID 07C0845B161722B3)
data: [2048 bits]
:user ID packet: "Testing <testing@testing.nil>"
:signature packet: algo 1, keyid 07C0845B161722B3
version 4, created 1536783228, md5len 0, sigclass 0x13
digest algo 2, begin of digest 33 ee
hashed subpkt 2 len 4 (sig created 2018-09-12)
hashed subpkt 27 len 1 (key flags: 03)
hashed subpkt 9 len 4 (key expires after 32d3h46m)
hashed subpkt 11 len 5 (pref-sym-algos: 9 8 7 3 2)
hashed subpkt 21 len 5 (pref-hash-algos: 8 2 9 10 11)
hashed subpkt 22 len 3 (pref-zip-algos: 2 3 1)
hashed subpkt 30 len 1 (features: 01)
hashed subpkt 23 len 1 (key server preferences: 80)
subpkt 16 len 8 (issuer key ID 07C0845B161722B3)
data: [2046 bits]
gpg: sec 2048R/161722B3 2018-09-12 Testing <testing@testing.nil>
gpg: key 161722B3: secret key imported
gpg: pub 2048R/161722B3 2018-09-12 Testing <testing@testing.nil>
gpg: writing to `/home/jwodder/.gnupg/pubring.gpg'
gpg: using PGP trust model
gpg: key 793721A3: accepted as trusted key
gpg: key 161722B3: public key "[User ID not found]" imported
gpg: Total number processed: 1
gpg: imported: 1 (RSA: 1)
gpg: secret keys read: 1
gpg: secret keys imported: 1
The only fingerprint to be found is that of the revocation key.
$ gpg2 --dry-run --import -vvvv secret.asc
Same output as above.
$ gpg --list-packets secret.asc
$ gpg2 --list-packets secret.asc
Basically the same output as the --dry-run --import -vvvv commands, only without the gpg: lines.
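A hedged workaround rather than the single-command answer being asked for: importing into a throwaway GNUPGHOME leaves the real keyring untouched and still prints the full 40-character fingerprint; newer GnuPG releases also add gpg --show-keys and --import-options show-only, which the Xenial builds above predate.

tmp=$(mktemp -d); chmod 700 "$tmp"
gpg --homedir "$tmp" --import secret.asc
gpg --homedir "$tmp" --fingerprint
rm -rf "$tmp"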
Why is my udev rule not working? Posted: 30 Jun 2021 08:08 AM PDT I need to automatically run my script /var/www/html/configWWW when any USB device is plugged into my Raspberry Pi.
UDEV RULE - /etc/udev/rules.d/myRule.rules
ACTION=="add", SUBSYSTEM=="usb", ATTR{idVendor}=="****", ATTR{idProduct}=="****", RUN+="/var/www/html/configWWW"
MY SCRIPT - /var/www/html/configWWW
#!/bin/bash
file="/media/pi/USB/SymSif.xml"
if [ -f "$file" ]
then
( echo "it works: $(date)" >> /home/pi/Desktop/test.txt )
else
( echo "it does not works: $(date)" >> /home/pi/Desktop/test.txt )
fi
On the other hand, if I run the script from bash with /var/www/html/configWWW, it works! Why doesn't my udev rule work like my bash command?
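A hedged sketch of how to see what udev is really doing, plus the likely timing issue: RUN+= fires at event time, before the desktop automounter has mounted anything under /media/pi/USB, so even a correctly matched rule will find the file missing; matching the block/partition device (SUBSYSTEM=="block") and handling the mount yourself, or triggering a systemd unit instead, are the usual ways around that. Debugging commands:

sudo udevadm control --reload-rules          # pick up rule edits without rebooting
udevadm monitor --environment --udev         # plug the device in and watch the events
udevadm test --action=add /sys/block/sda/sda1 2>&1 | less   # the sysfs path is an example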
ClamAV: suppress all output except summary Posted: 30 Jun 2021 08:47 AM PDT When using Clam AntiVirus from within GNU Bash, how should one invoke clamscan such that it will reliably suppress all output except the final summary? These attempts don't work:
- clamscan --quiet . : suppresses the final summary.
- clamscan -o -r ~/ 2>/dev/null : prints lines that aren't "OK" (but which don't necessarily indicate an infection: e.g. files that are simply empty files, or symbolic links) to stdout. Those lines therefore bypass the redirect and are still printed on the terminal in addition to the final summary.
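A hedged sketch that sidesteps clamscan's per-file options entirely: filter the output so nothing is printed until the summary banner appears (the banner contains "SCAN SUMMARY" on current clamscan builds, an assumption worth checking on yours). -i additionally limits per-file lines to infected files:

clamscan -r -i ~/ 2>/dev/null | awk '/SCAN SUMMARY/,0'   # print from the banner to the end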