Friday, April 30, 2021

Recent Questions - Unix & Linux Stack Exchange

Three short beeps, ASRock mobo not booting, tried new RAM

Posted: 30 Apr 2021 10:25 AM PDT

I'm troubleshooting an old ASRock motherboard that I found at the dump.

It's an ASRock G41M VS3 motherboard. I bought a new RAM stick to test it with, a Transcend 1066 MHz DDR3 1 GB module.

After installing the new RAM I get three short beeps that repeat continuously. This indicates a bad RAM module, but the RAM is brand new.

I swapped RAM slots, reseated the module, reset the BIOS, and even tried a POST card, but nothing seems to work.

Bad RAM slots? Bus problem?

By the way, the CMOS battery I'm using is a bit old; I measured its voltage and it reads almost 1 V. I don't think this should be the problem.

Computer freezes on display disconnect or power on

Posted: 30 Apr 2021 10:15 AM PDT

Using Manjaro Linux, kernel 5.10.32-1, an Nvidia RTX GPU with the proprietary Nvidia drivers, and a Ryzen 3900 CPU. This does not happen when running Windows 10.

The issue is the system freezing when I power on the display while the Linux system is running. It works fine as long as the display is off. I can ssh into it, and I can use the keyboard to (blindly) switch to a console and issue a reboot. When I tap Num Lock/Caps Lock, their LEDs on the keyboard change state until I power on the display, at which point the system becomes unresponsive.

If I turn the display off, wait a few seconds for it to properly power down (it has some sort of soft-off feature), then power it back on, the computer will freeze.

The freeze also happens when I unplug the display's HDMI connector. Doesn't happen on Windows, so I don't think it's electrical or a hardware issue.

I disabled all screen locking and screen blanking/power down settings in the Linux system.

The system runs fine when idle or under load. It can also wake up from sleep, although it will freeze if the TV was off during wake up and is turned on after. This is not an issue with entering/waking from a power saving mode.

Please help me diagnose and fix this issue.
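Since the machine stays reachable over ssh, one way to catch what happens at the moment of the freeze is to follow the kernel log from an ssh session before powering the display on. A minimal sketch, assuming a systemd-based setup like Manjaro:

# follow kernel messages live; Nvidia driver errors (NVRM/Xid messages) usually land here
journalctl -k -f
# or, equivalently, without the journal
dmesg -w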

Using the less command to read the file (olleh) contained in the my_school folder

Posted: 30 Apr 2021 10:04 AM PDT

Use the less command to read the file contained in the my_school folder. Enter the content of the file here: *

root@ip-10-251-17-152:~# less my_school olleh
my_school is a directory
root@ip-10-251-17-152:~#

What could be the problem? I am not able to read the file contained within the folder.
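For reference, less reads files, not directories, which is why it reports "my_school is a directory". A minimal sketch of reading a file inside the folder, assuming the file is named olleh as the command above suggests:

# see what is actually inside the directory
ls my_school
# then open the file itself
less my_school/olleh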

tcpdump to perform wireless capture with advanced filters

Posted: 30 Apr 2021 10:15 AM PDT

I'm trying to use tcpdump to perform realtime wireless captures (without the option to save the file) on a Mac. My wireless interface is en0. I want to use similar filters as I would in Wireshark. My exact goal is to capture channel utilization and client count, but filter on a condition (greater than or less than). The way I would do this in Wireshark is with the following filters:

Channel utilization:
    wlan.bssid == 11:22:33:44:55:66 && wlan.qbss.cu > 50

Client count:
    wlan.bssid == 11:22:33:44:55:66 && wlan.qbss.scount == 7

How can I do the same with tcpdump?

The closest I found so far was another post here.

Note: Using tshark is not an option, as I want to use this in a script and it needs to be portable to systems that don't have Wireshark installed.
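For what it's worth, a sketch of the part that tcpdump's BPF filter syntax can express: a monitor-mode capture of beacons from one BSSID. BPF cannot decode the QBSS information element, so the "greater than / less than" comparison on channel utilization or station count would still need post-processing of the captured frames:

# -I enables monitor mode, -e prints 802.11 headers, -s 0 captures full frames
sudo tcpdump -I -i en0 -e -s 0 'type mgt subtype beacon and wlan addr3 11:22:33:44:55:66'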

Tomcat & httpd - Tomcat not responding anymore

Posted: 30 Apr 2021 09:04 AM PDT


I installed Apache Tomcat 9.0.41 a few months ago on CentOS 7 (and it was working fine the whole time). Now I have installed httpd on the same server (as a proxy for multiple domains + SSL). The httpd proxy seems to work fine; it is redirecting requests to the right servers. But Tomcat does not work anymore.
Tomcat - 8080
httpd - 80, 443
Tomcat and httpd run under different non-privileged users. No errors in log files. It makes no difference if httpd is running or not.

When I run curl localhost:8080, it waits indefinitely. Telnet to the port works.
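For reference, a quick sanity check (a sketch) to confirm that Tomcat is still listening on 8080 and to avoid the indefinite hang while testing:

# show the listening socket and the process that owns it
sudo ss -tlnp | grep 8080
# give curl a timeout instead of letting it wait forever
curl -v --max-time 5 http://localhost:8080/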

These are the default connector settings from Tomcat's conf/server.xml:

<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           redirectPort="8443" />

I also tried uncommenting the AJP/1.3 connector, with no luck.

<Connector protocol="AJP/1.3" address="::1" port="8009" redirectPort="8443" />

I guess it is just a configuration problem, but I have no idea what to change. SSL problem?
Thank you.

Pop!_OS dual boot issues

Posted: 30 Apr 2021 09:23 AM PDT

I tried to dual boot Pop!_OS 20.04 LTS. The machine already had Windows on it, so I created a partition and formatted it to ext4. After a successful installation I tried to switch to Windows. I selected the partition with Windows in the boot menu, but instead of Windows, Pop!_OS showed up. I tried to use the Windows Media Creation Tool to repair it, but while trying to boot into the pen drive's UEFI partition, I would still boot into Pop!_OS itself.

Running fdisk -l outputs:

Disk /dev/sda: 223.58 GiB, 240057409536 bytes, 468862128 sectors
Disk model: KINGSTON SA400S3
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xc241bdfe

Device     Boot     Start       End   Sectors   Size Id Type
/dev/sda1  *      1026048 203700092 202674045  96.7G  7 HPFS/NTFS/exFAT
/dev/sda2       203700224 204797951   1097728   536M 27 Hidden NTFS WinRE
/dev/sda3       204800000 468858879 264058880 125.9G 83 Linux

The Boot Mode is set to: auto, so it should be able to boot both UEFI and legacy.

efibootmgr -v outputs:

EFI variables are not supported on this system.  
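For reference, that message usually means the running system was booted in legacy/BIOS mode rather than UEFI, which would also be consistent with the dos disklabel above. A quick check (a sketch):

# the efivars directory only exists when the current session was booted via UEFI
[ -d /sys/firmware/efi ] && echo "booted in UEFI mode" || echo "booted in legacy BIOS mode"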

Did I screw up my BIOS or my EFI? How do I fix this? Please give any possible solutions. Thanks!

Why is authentication based on certificates not working on this SSH server?

Posted: 30 Apr 2021 08:35 AM PDT

I have a series of servers that provide SSH access with user certificates. All of them are working fine except one.

I've examined debug logs on both the failing server and a working server, and I've found these relevant differences:

debug1: list_hostkey_types: rsa-sha2-512,rsa-sha2-256,ssh-rsa,ecdsa-sha2-nistp256,ssh-ed25519 [preauth]
---
debug1: kex: host key algorithm: ecdsa-sha2-nistp256 [preauth]
debug1: kex: client->server cipher: chacha20-poly1305@openssh.com MAC: <implicit> compression: none [preauth]
debug1: kex: server->client cipher: chacha20-poly1305@openssh.com MAC: <implicit> compression: none [preauth]
---
debug1: rekey after 134217728 blocks [preauth]
---
debug1: rekey after 134217728 blocks [preauth]
---
debug1: userauth_pubkey: test pkalg ecdsa-sha2-nistp256-cert-v01@openssh.com pkblob ECDSA-CERT SHA256:Vxxxxxxxxxxxxxxxxxxx+RAijZUftE CA RSA SHA256:+pO/cl3KD+TLhrd991y/xxxxxxxxxxxxxxxxxxxqFKLQ [preauth]
debug1: temporarily_use_uid: 1001/1001 (e=0/0)
debug1: trying public key file /home/mister/.ssh/authorized_keys
debug1: Could not open authorized keys '/home/mister/.ssh/authorized_keys': No such file or directory
debug1: restore_uid: 0/0
debug1: temporarily_use_uid: 1001/1001 (e=0/0)
debug1: trying public key file /home/mister/.ssh/authorized_keys2
debug1: Could not open authorized keys '/home/mister/.ssh/authorized_keys2': No such file or directory
debug1: restore_uid: 0/0
Failed publickey for mister from 192.168.43.47 port 58770 ssh2: ECDSA-CERT SHA256:Vxxxxxxxxxxxxxxxxxxx+RAijZUftE ID mister (serial 2000) CA RSA SHA256:+pO/cl3KD+TLhrd991y/xxxxxxxxxxxxxxxxxxxqFKLQ

What configuration am I missing on this server?
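For context, the log shows sshd treating the certificate like an ordinary public key and falling back to authorized_keys, which is what typically happens when that host's sshd_config does not trust the signing CA. A minimal sketch of the directive usually involved (the path to the CA public key is an assumption, not taken from the question):

# /etc/ssh/sshd_config -- sketch; point this at the CA public key used to sign the user certs
TrustedUserCAKeys /etc/ssh/user_ca.pub

After editing, reload sshd and compare the directive against the working server's sshd_config.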

Prevent direct access to internal URLs

Posted: 30 Apr 2021 08:35 AM PDT

I have server1 (Javascript application running on AWS EC2 with nginx in public subnet) and server2 (api server on AWS EC2 in private subnet). In the security group of AWS for Server2 I have set port access only for Server1 security group (instead of open access).

In the nginx config file I have added:

location /myapi/ {
    proxy_pass http://xxx.xxx.x.xxx:9000;  # API server2
}

From my application I can access the server2 API via internal calls without any issue. But I can also directly access http://myapplication.com/myapi. I want to prevent any direct access to http://myapplication.com/myapi, while it should still work when called from within my application running on Server1 (this IP).

I am not sure whether OAuth is what I should be looking at to secure this communication, or whether it is something I should be setting at the AWS Security Group level or the nginx level.
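As an illustration of what the nginx-level option can look like (a sketch only; whether IP-based rules fit depends on whether the /myapi/ calls are made server-side from Server1 or from users' browsers, and the allowed address below is an assumption):

location /myapi/ {
    # only accept requests whose source address is the application host itself
    allow 127.0.0.1;   # example address; adjust to where the app's calls actually originate
    deny  all;
    proxy_pass http://xxx.xxx.x.xxx:9000;
}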

Thank you for any pointers/insights.

Connecting strongSwan to Amazon VPC

Posted: 30 Apr 2021 08:31 AM PDT

I've established a BGP connection from my Linux box to Amazon VPC using this guide: https://www.edge-cloud.net/2019/07/18/aws-site-2-site-vpn-with-strongswan-frrouting/#strongswan-setup

The only strange thing is in the iptables mangle table:

I don't see any matches on the MARK rules (IPsec is still working, at least for now); I don't know whether that is bad or not:

Chain PREROUTING (policy ACCEPT 307M packets, 337G bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain INPUT (policy ACCEPT 207M packets, 207G bytes)
 pkts bytes target     prot opt in     out     source               destination
    0     0 MARK       esp  --  *      *       xx.xx.204.63         xx.xx.xx.251         MARK set 0x64
    0     0 MARK       esp  --  *      *       xx.xx..121.249       xx.xx.xx.251         MARK set 0xc8

Chain FORWARD (policy ACCEPT 100M packets, 131G bytes)
 pkts bytes target     prot opt in     out     source               destination
78389 4702K TCPMSS     tcp  --  *      VTI_awssg1  0.0.0.0/0            0.0.0.0/0            tcp flags:0x06/0x02 TCPMSS clamp to PMTU
  807 48404 TCPMSS     tcp  --  *      VTI_awssg2  0.0.0.0/0            0.0.0.0/0            tcp flags:0x06/0x02 TCPMSS clamp to PMTU

Chain OUTPUT (policy ACCEPT 90M packets, 73G bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain POSTROUTING (policy ACCEPT 192M packets, 205G bytes)
 pkts bytes target     prot opt in     out     source               destination
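For reference, a simple way to watch whether those MARK rules ever see a packet is to poll the per-rule counters (a sketch):

# refresh the INPUT chain counters every second; the pkts column should move
# if the MARK rules are actually matching incoming ESP traffic
watch -n 1 'iptables -t mangle -L INPUT -v -n'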

Any ideas? Thanks

dwmblocks clicks not working in Arch Linux?

Posted: 30 Apr 2021 08:19 AM PDT

I have dwmblocks installed with dwm on Arch Linux. As per the suckless website, I have followed the steps written there:

  1. Patched dwm with statuscmd (signal version).
  2. Patched dwmblocks with dwmblocks-statuscmd as required.
  3. Added the $BUTTON variable to my script running in dwmblocks.
  4. After patching, I installed both dwm and dwmblocks again with sudo make clean install.
  5. But when I restarted the X server, clicks were not working; I also restarted my PC, but it didn't help.

I don't have the status2d patch installed.

What is happening when I click on the status bar: a part of the status bar goes invisible (screenshots: normal status bar; status bar after clicking).

  • This is the part of my dwm/config.h file which handles mouse clicks, added after patching:
{ ClkStatusText,    0,      Button1,        sigstatusbar,   {.i = 1} },
{ ClkStatusText,    0,      Button2,        sigstatusbar,   {.i = 2} },
{ ClkStatusText,    0,      Button3,        sigstatusbar,   {.i = 3} },
  • And below is the script that displays values in the status bar and handles the click events:
#!/bin/sh

dwm_battery () {
    # Change BAT1 to whatever your battery is identified as. Typically BAT0 or BAT1
    CHARGE=$(cat /sys/class/power_supply/BAT0/capacity)
    STATUS=$(cat /sys/class/power_supply/BAT0/status)

    if [[ $STATUS = "Charging" ]]; then
        if [[ $CHARGE -lt 11 ]]; then
            printf "  |  "
        elif [[ $CHARGE -ge 11 && $CHARGE -lt 44 ]]; then
            printf " $CHARGE  |  "
        elif [[ $CHARGE -ge 45 && $CHARGE -lt 66 ]]; then
            printf " $CHARGE  |  "
        elif [[ $CHARGE -ge 66 && $CHARGE -lt 90 ]]; then
            printf " $CHARGE  |  "
        else
            printf " $CHARGE  |  "
        fi
    elif [[ $STATUS = "Not charging" ]]; then
        printf " $CHARGE  |  "
    else
        if [[ $CHARGE -lt 11 ]]; then
            printf " |  "
        elif [[ $CHARGE -ge 11 && $CHARGE -lt 44 ]]; then
            printf " $CHARGE |  "
        elif [[ $CHARGE -ge 45 && $CHARGE -lt 66 ]]; then
            printf " $CHARGE |  "
        elif [[ $CHARGE -ge 66 && $CHARGE -lt 90 ]]; then
            printf " $CHARGE |  "
        else
            printf " $CHARGE |  "
        fi
    fi
}

dwm_battery

case $BUTTON in
    1) notify-send "button 1" ;;
    2) notify-send "button 2" ;;
    3) notify-send "button 3" ;;
esac

I tried this script with BUTTON set to 1 and it worked, so there is no problem with the script itself.
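That is, a manual test along these lines works (a sketch; the script name is a placeholder):

# simulate dwmblocks passing the clicked button number to the block script
BUTTON=1 sh battery.sh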


This is my configuration file for dwm, if this helps

bind held key without blocking typing

Posted: 30 Apr 2021 07:53 AM PDT

I have a foot pedal that I would like to toggle a file's contents when pressed and released. I have the foot pedal acting like a normal key with a keycode and have disabled it from repeating with xset -r <keycode>. But when I bind it to a command with sxhkd/xbindkeys, it blocks me from typing while it is pressed. I need to be able to type while it is pressed.

sxhkd:

F25
    echo dictate > ~/mode

@F25
    echo command > ~/mode

Unable to open device '/dev/sdb' for writing! Errno is 30! Aborting write!

Posted: 30 Apr 2021 07:23 AM PDT

1st set of commands

Using these commands with GDISK:

> sudo gdisk /dev/sdb
o
n
w

I get:

Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING PARTITIONS!!

Do you want to proceed? (Y/N): Y

OK; writing new GUID partition table (GPT) to /dev/sdb.

Unable to open device '/dev/sdb' for writing! Errno is 30! Aborting write!

2nd set of commands

Also using another set of GDISK commands:

> sudo gdisk /dev/sdb
x
z

I get:

About to wipe out GPT on /dev/sdb. Proceed? (Y/N): Y

Problem opening '/dev/sdb' for writing! Program will now terminate.

Blank out MBR? (Y/N): Y

Warning! MBR not overwritten! Error is 30!

Why

The device is a flash USB drive.
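For reference, errno 30 is EROFS ("Read-only file system"). A quick check (a sketch) of whether the kernel has flagged the whole device read-only, which on USB sticks often points to a hardware write-protect switch or a flash controller that has locked itself into read-only mode:

# 1 means the block device itself is read-only, 0 means writable
sudo blockdev --getro /dev/sdb
cat /sys/block/sdb/ro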

  • What might be the cause?
  • What else can I try?

DD

The dd couldn't help:

> sudo dd if=/dev/zero of=/dev/sdb status=progress
dd: failed to open '/dev/sdb': Read-only file system

FDISK

fdisk didn't help either:

> sudo fdisk /dev/sdb

Welcome to fdisk (util-linux 2.33.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

fdisk: cannot open /dev/sdb: Read-only file system

HDPARM

I had already used hdparm:

> sudo hdparm -r0 /dev/sdb

/dev/sdb:
 setting readonly to 0 (off)
 readonly      =  0 (off)

Run docker on a Linux VM?

Posted: 30 Apr 2021 07:04 AM PDT

I am trying to run docker on a Linux VM on a Windows host machine.

VM is:

OS Debian GNU/Linux 10 (buster) x86_64
Host VirtualBox 1.2
Kernel 4.19.0-16-amd64

Trying to run sudo apt-get install docker-ce shows an error message. It shows docker-ce as having some unmet dependencies:

docker-ce : Dipende: containerd.io (>= 1.4.1) ma non sta per essere installato
            Dipende: docker-ce-cli ma non sta per essere installato
            Dipende: libc6 (>= 2.32) ma la versione 2.28-10 sta per essere installata
            Raccomanda: docker-ce-rootless-extras ma non sta per essere installato

(In English: docker-ce depends on containerd.io (>= 1.4.1), docker-ce-cli and libc6 (>= 2.32), but they are not going to be installed; libc6 2.28-10 is the version that would be installed. docker-ce-rootless-extras is recommended but not going to be installed.)
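The libc6 (>= 2.32) requirement stands out: Debian 10 (buster) ships libc6 2.28, as the message itself shows, so the configured Docker repository line is most likely for a newer release than the one installed. A sketch of how to check (file paths are the conventional ones):

# which release codename is this system actually on?
lsb_release -cs
# which Docker repository line is configured?
grep -r docker /etc/apt/sources.list /etc/apt/sources.list.d/ 2>/dev/null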

Is it possible to print multiple copies to PDF using CUPS/lpr

Posted: 30 Apr 2021 07:56 AM PDT

I'm testing an application that sends PDFs to a printer, and it may request that multiple copies are printed, sending a command like

/usr/bin/lpr -T Document Title -# 10  

I don't have access to a physical printer, so I'm using the CUPS generic virtual printer to "print" PDFs to disk. This works; however, regardless of the number of copies requested, it only produces a single file.

$ echo hello > hello.txt
$ lpr -P PDF -T test -# 2 hello.txt
$ ls PDF/
test.pdf

Is there a configuration that will respect the copies argument, for example creating hello-1.pdf and hello-2.pdf?

  • Distribution is Debian 9.
  • cups: 2.2.1-8+deb9u6
  • cups-bsd: 2.2.1-8+deb9u6
  • cups-pdf: 2.6.1-22
  • grep MaxCopies /etc/cups/cupsd.conf -> MaxCopies 100

selinux: files in /home are created with wrong context

Posted: 30 Apr 2021 07:05 AM PDT

On CentOS 7, folders/files created in /home are getting the wrong context:

# pwd
/home
# ls
# ls -dZ .
drwxr-xr-x. root root system_u:object_r:home_root_t:s0 .
# mkdir -p test/.ssh
# touch test/.ssh/authorized_keys
# ls -ldZ test test/.ssh
drwxr-xr-x. root root unconfined_u:object_r:home_root_t:s0 test
drwxr-xr-x. root root unconfined_u:object_r:home_root_t:s0 test/.ssh
-rw-r--r--. root root unconfined_u:object_r:home_root_t:s0 test/.ssh/authorized_keys
#

The same problem happens when a user's home folder is created on the first ssh login (the user is then not able to use ssh key authorization because SELinux complains).

The context problem does not exist when root does su - user; then the home folder is created with the right context.

How do I teach sshd/ssh sessions to create new users' home directories with the proper context?

Now, to fix it, the user needs to run the restorecon command, for example:

# ls -ldZ test/.ssh/authorized_keys test test/.ssh
drwxr-xr-x. root root unconfined_u:object_r:home_root_t:s0 test
drwxr-xr-x. root root unconfined_u:object_r:home_root_t:s0 test/.ssh
-rw-r--r--. root root unconfined_u:object_r:home_root_t:s0 test/.ssh/authorized_keys
# restorecon -vR test
restorecon reset /home/test context unconfined_u:object_r:home_root_t:s0->unconfined_u:object_r:user_home_dir_t:s0
restorecon reset /home/test/.ssh context unconfined_u:object_r:home_root_t:s0->unconfined_u:object_r:ssh_home_t:s0
restorecon reset /home/test/.ssh/authorized_keys context unconfined_u:object_r:home_root_t:s0->unconfined_u:object_r:ssh_home_t:s0
# ls -ldZ test/.ssh/authorized_keys test test/.ssh
drwxr-xr-x. root root unconfined_u:object_r:user_home_dir_t:s0 test
drwxr-xr-x. root root unconfined_u:object_r:ssh_home_t:s0 test/.ssh
-rw-r--r--. root root unconfined_u:object_r:ssh_home_t:s0 test/.ssh/authorized_keys
#
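For comparison, a minimal sketch of creating a home directory in a way that is SELinux-aware (shadow-utils' useradd sets the file-creation context from policy, unlike a bare mkdir as root), which can help when reproducing the difference described above:

# creates /home/demo and applies the SELinux home-directory defaults
useradd -m demo
ls -ldZ /home/demo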

How to remove old UEFI boot entry?

Posted: 30 Apr 2021 08:30 AM PDT

Every time my computer boots, I face this GNU GRUB menu:

GRUB menu

I don't know how to remove it, and every time I boot I have to type exit and then choose the debian entry manually.

BIOS boot menu

I've looked online and tried sudo efibootmgr -v

BootCurrent: 0005
Timeout: 0 seconds
BootOrder: 0000,2001,3000,0005,0001,2002,2004
Boot0000* ubuntu    HD(1,GPT,996936a8-a9d6-4eaf-8a27-9db36650aa88,0x800,0x100000)/File(\EFI\ubuntu\shimx64.efi)RC
Boot0001* Notebook Hard Drive - SanDisk SD8SN8U-256G-1006   BBS(HD,Notebook Hard Drive - SanDisk SD8SN8U-256G-1006,0x500)................-...........A.......................................z.......A.........................
Boot0005* debian    HD(1,GPT,996936a8-a9d6-4eaf-8a27-9db36650aa88,0x800,0x100000)/File(\EFI\debian\shimx64.efi)
Boot2001* EFI USB Device    RC
Boot3000* Internal Hard Disk or Solid State Disk    RC

and removing the entry with sudo efibootmgr -b 0000 -B, but when I restart my PC, the GNU GRUB menu comes back again.

I also tried changing the boot order, but it also didn't work.

How do I remove the Ubuntu UEFI boot entry?
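For reference, a sketch of something worth checking in addition to deleting the NVRAM entry: whether the old loader files are still present on the EFI system partition, since some firmwares re-create boot entries for loaders they find there. The /boot/efi mount point is the usual one and is an assumption here:

# confirm where the ESP is mounted, then see which loader directories still exist on it
findmnt /boot/efi
sudo ls /boot/efi/EFI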

awk: split datetime column into three separate columns in a csv

Posted: 30 Apr 2021 09:50 AM PDT

I am trying to use substr to split a datetime column, the fifth one (previous_test), into three additional columns at the end.

Input:

id,tester,company,chief,previous_test,test,date,result,cost
6582983b-61d4-4371-912d-bbc76bb8208b,Audrey Feest,Pagac-Gorczany,Claudine Moakson,18/02/2019,Passwords,20/05/2020,none,£11897.96

Expected Output:

id,tester,company,chief,previous_test,test,date,result,cost,day,month,year
6582983b-61d4-4371-912d-bbc76bb8208b,Audrey Feest,Pagac-Gorczany,Claudine Moakson,18/02/2019,Passwords,20/05/2020,none,£11897.96,18,02,2019

I've tried using:

awk -F, -v OFS="," '{s = substr($5, 1, 2)} {g = substr($5, 4, 2)} {l = substr($5, 7, 4)} {print s, g, l}' file.csv  

And all I get is only the date separated by commas, but not as three additional columns appended to the existing columns.

I am missing how to append the output into three separate columns.
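For illustration, a minimal sketch of one way to append the pieces as extra columns instead of printing them on their own: print the whole record ($0) followed by the split parts, and handle the header row separately (this uses split on "/" rather than substr; the file name is the one from the question):

awk -F, -v OFS=',' '
    NR == 1 { print $0, "day", "month", "year"; next }   # extend the header row
    {
        split($5, d, "/")                                 # d[1]=day, d[2]=month, d[3]=year
        print $0, d[1], d[2], d[3]
    }
' file.csv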

sed script to print n lines after the last occurrence of a match

Posted: 30 Apr 2021 09:28 AM PDT

Here is a link to print all lines following the last occurrence of a match.

However, I only want to print two lines after the last occurrence of a match. How would I do that?
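For illustration, a two-pass sketch using awk rather than pure sed (sed cannot easily look back to find the last match): the first pass over the file records the line number of the last match, the second pass prints the two lines that follow it. PATTERN and the file name are placeholders, and it assumes the pattern occurs at least once:

awk 'NR == FNR { if (/PATTERN/) last = NR; next }   # pass 1: remember the last matching line
     FNR > last && FNR <= last + 2' file file       # pass 2: print the two lines after it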

How to send a JSON message to Slack?

Posted: 30 Apr 2021 08:53 AM PDT

I have the following JSON file:

{
  "blocks": [
    {
      "type": "section",
      "text": {
        "type": "mrkdwn",
        "text": "Success: ondrejdolezal93's workflow (<https://circleci.com/api/v1.1/project/github/integromat/docker-db-updater/628|build>) in <https://app.circleci.com/pipelines/github/integromat/docker-db-updater%7Cintegromat/docker-db-updater> (<https://app.circleci.com/pipelines/github/integromat/docker-db-updater?branch=main%7Cmain>)\n- Fix update version (<https://github.com/integromat/docker-db-updater/commit/9a5b8d61a5c79dabbb2a47bb68b33748034986aa%7C9a5b8d6> by ondrejdolezal93)"
      }
    }
  ]
}

I am trying to send the message to a Slack webhook using curl.

My command is the following:

curl -X POST -H 'Content-type: application/json' --data @message.json $SLACK_WEBHOOK_URL  

The reply from curl is: no_text

What am I doing wrong? The JSON is formatted according to the Slack API documentation.
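For reference, a couple of quick sanity checks (a sketch) before suspecting the block layout itself: confirm the file parses as JSON, confirm the webhook URL variable is actually set in the shell running the command, and quote it when passing it to curl:

# validate the payload locally
jq . message.json
# an unset variable here would make curl post to the wrong place
echo "$SLACK_WEBHOOK_URL"
# resend with the URL quoted and errors shown
curl -sS -X POST -H 'Content-type: application/json' --data @message.json "$SLACK_WEBHOOK_URL"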

Permanently and completely disable onscreen keyboard for Ubuntu 20.04

Posted: 30 Apr 2021 07:27 AM PDT

I just installed a fresh copy of the (currently) newest Ubuntu LTS. Because I did this on an Acer with Touchscreen capability, it decided I want an on-screen keyboard which I very much do NOT. It's constantly in the way and completely unnecessary.

I've been searching for days for a solution; it's already off in the settings, and I can't find any way to force-remove it from the filesystem or otherwise make it behave. How do I deal with this nuisance? As it is, there's no way the computer will work for what I need.
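For reference, a sketch of checking the GNOME accessibility key that controls the on-screen keyboard (this may well be the same switch as the Settings toggle that is already off, in which case the popup is coming from somewhere else, such as the shell's built-in touch OSK):

gsettings get org.gnome.desktop.a11y.applications screen-keyboard-enabled
gsettings set org.gnome.desktop.a11y.applications screen-keyboard-enabled false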

Using jq on a malformed JSON file

Posted: 30 Apr 2021 08:39 AM PDT

Hello, is it possible to use jq to extract fields from a malformed JSON file?

{
    '_id': ObjectId('58049da30b78a4a11e3c9869'),
    'name': 'joe bam',
    'username': 'joe_bam',
    'contact_info': {
        'email': 'N/a@mail.com'
    },
    'color': 'Blue',
    'updated_at': datetime.datetime(2017, 5, 18, 11, 16, 19, 737000),
    'created_at': datetime.datetime(2016, 10, 17, 9, 45, 7, 226000),
    'token': '$2y$10$VMgv1S/NiGzkPsGhc4S.6eGFvEXv5YenlWQNdqUbVy4aGaeKOyxpi',
    'views': 29,
    'status_logged': True,
    'provider': 'signup'
}

In this example I want to extract: jq -r '[.name, .username, |contact_info .email] | @csv'

Is that possible? Because I have validated this JSON and it gives me errors.
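For illustration, a rough sketch of pre-processing the file into real JSON before handing it to jq: the content is a Python/Mongo-style repr, so the single quotes, ObjectId(...), datetime.datetime(...) and True need converting first. This is fragile if values contain quotes or parentheses, it assumes GNU sed (for \b), and the corrected jq path for the e-mail field is .contact_info.email:

tr "'" '"' < file \
  | sed -E 's/ObjectId\(("[^"]*")\)/\1/; s/datetime\.datetime\(([^)]*)\)/"\1"/; s/\bTrue\b/true/g; s/\bFalse\b/false/g' \
  | jq -r '[.name, .username, .contact_info.email] | @csv'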

Thanks!

How to install clang-10 on Ubuntu 20.04

Posted: 30 Apr 2021 10:30 AM PDT

I upgraded my Linux box from Ubuntu 18.04 to 20.04.

I need to install the clang suite of compilers, and the apt command is giving me errors. I've searched for many possible solutions, but so far none of the recommendations I have found for similar problems have helped. Here is what I get when I try apt install clang:

➜ ~ sudo apt-get install -f clang
Reading package lists... Done
Building dependency tree
Reading state information... Done
Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation:

The following packages have unmet dependencies:
 clang : Depends: clang-10 (>= 10~) but it is not going to be installed
E: Unable to correct problems, you have held broken packages.
➜ ~

I've done all sorts of apt-get update and apt-get upgrade. I've also tried to list every package one after the other, but the list keeps growing and it hasn't solved the issue.

Edit: Following @Martin Konrad's suggestion, I tried aptitude:

➜  ~ sudo aptitude install clang
The following NEW packages will be installed:
  clang clang-10{ab} lib32gcc-s1{ab} lib32gcc1{ab} lib32stdc++6{ab}
  libc6-i386{a} libclang-common-10-dev{ab} libclang-cpp10{ab} libffi-dev{a}
  libobjc-9-dev{ab} libobjc4{ab} libomp-10-dev{a} libomp5-10{a} libpfm4{a}
  libz3-4{a} libz3-dev{a} llvm-10{a} llvm-10-dev{ab} llvm-10-runtime{a}
  llvm-10-tools{a} python3-pygments{a}
0 packages upgraded, 21 newly installed, 0 to remove and 9 not upgraded.
Need to get 58.4 MB of archives. After unpacking 381 MB will be used.
The following packages have unmet dependencies:
 lib32stdc++6 : Depends: gcc-10-base (= 10-20200411-0ubuntu1) but 10-20200416-0ubuntu1~18.04 is installed
 libobjc4 : Depends: gcc-10-base (= 10-20200411-0ubuntu1) but 10-20200416-0ubuntu1~18.04 is installed
 lib32gcc-s1 : Depends: gcc-10-base (= 10-20200411-0ubuntu1) but 10-20200416-0ubuntu1~18.04 is installed
 clang-10 : Depends: libclang1-10 (= 1:10.0.0-4ubuntu1) but 1:10.0.1~++20200519100828+f79cd71e145-1~exp1~20200519201452.38 is installed
 lib32gcc1 : Depends: gcc-10-base (= 10-20200411-0ubuntu1) but 10-20200416-0ubuntu1~18.04 is installed
 libclang-cpp10 : Depends: libllvm10 (= 1:10.0.0-4ubuntu1) but 1:10.0.1~++20200519100828+f79cd71e145-1~exp1~20200519201452.38 is installed
 libobjc-9-dev : Depends: gcc-9-base (= 9.3.0-10ubuntu2) but 9.3.0-11ubuntu0~18.04.1 is installed
                 Depends: libgcc-9-dev (= 9.3.0-10ubuntu2) but 9.3.0-11ubuntu0~18.04.1 is installed
 libclang-common-10-dev : Depends: libllvm10 (= 1:10.0.0-4ubuntu1) but 1:10.0.1~++20200519100828+f79cd71e145-1~exp1~20200519201452.38 is installed
 llvm-10-dev : Depends: libllvm10 (= 1:10.0.0-4ubuntu1) but 1:10.0.1~++20200519100828+f79cd71e145-1~exp1~20200519201452.38 is installed
The following actions will resolve these dependencies:

        Keep the following packages at their current version:
1)      clang [Not Installed]
2)      clang-10 [Not Installed]
3)      lib32gcc-s1 [Not Installed]
4)      lib32gcc1 [Not Installed]
5)      lib32stdc++6 [Not Installed]
6)      libclang-common-10-dev [Not Installed]
7)      libclang-cpp10 [Not Installed]
8)      libobjc-9-dev [Not Installed]
9)      libobjc4 [Not Installed]
10)     llvm-10-dev [Not Installed]

        Leave the following dependencies unresolved:
11)     llvm-10 recommends llvm-10-dev

Accept this solution? [Y/n/q/?] Y
No packages will be installed, upgraded, or removed.
0 packages upgraded, 0 newly installed, 0 to remove and 9 not upgraded.
Need to get 0 B of archives. After unpacking 0 B will be used.

Its proposed solution is effectively to not install anything.
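For reference, the versions ending in ~18.04 in aptitude's report suggest that some packages are still coming from (or were installed from) 18.04-era repositories or PPAs. A sketch of how to see where the conflicting versions originate:

# where do the pinned/conflicting packages come from?
apt policy gcc-10-base libllvm10
# any leftover bionic (18.04) repository lines?
grep -rn bionic /etc/apt/sources.list /etc/apt/sources.list.d/ 2>/dev/null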

Removing BlackArch completely from the system

Posted: 30 Apr 2021 07:17 AM PDT

I've installed BlackArch like an idiot and not too long ago I tried to remove all files but there are still some crumbs left of it. I tried to update the packages through the terminal and this is what I got:

sudo pacman -Syyu
:: Synchronizing package databases...
 core                     148.9 KiB   242K/s 00:01 [######################] 100%
 extra                   1759.7 KiB   296K/s 00:06 [######################] 100%
 community                  5.3 MiB   568K/s 00:10 [######################] 100%
 multilib                 183.2 KiB  1263K/s 00:00 [######################] 100%
 blackarch                  2.7 MiB   752K/s 00:04 [######################] 100%
 blackarch.sig            566.0   B  0.00B/s 00:00 [######################] 100%
error: blackarch: signature from "Levon 'noptrix' Kayan (BlackArch Developer) <noptrix@nullsecurity.net>" is invalid
error: failed to update blackarch (invalid or corrupted database (PGP signature))
error: failed to synchronize all databases

How do I completely remove all instances of BlackArch from my computer? I don't want it to consistently look for its package updates!

I tried the following:

paclist blackarch | cut -d' ' -f1 | xargs sudo pacman -R
checking dependencies...
error: failed to prepare transaction (could not satisfy dependencies)
:: bind-tools: removing geoip breaks dependency 'geoip'
:: cryptsetup: removing argon2 breaks dependency 'argon2'
:: gnome-color-manager: removing exiv2 breaks dependency 'exiv2'
:: gnome-nettool: removing iputils breaks dependency 'iputils'
:: libgexiv2: removing exiv2 breaks dependency 'exiv2'
:: php: removing argon2 breaks dependency 'argon2'
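For reference, the database errors come from the [blackarch] repository still being configured for pacman; repositories are defined in /etc/pacman.conf, so a sketch of checking for (and then removing) that section:

# is the repo still configured?
grep -n -A 2 '\[blackarch\]' /etc/pacman.conf
# after deleting that section from pacman.conf, refresh the databases
sudo pacman -Syy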

xrdp disconnects immediately after connection from Windows10/Centos to Centos7

Posted: 30 Apr 2021 07:54 AM PDT

This is my xrdp config:

[Globals]
ini_version=1
fork=true
port=3389
use_vsock=false
tcp_nodelay=true
tcp_keepalive=true
security_layer=negotiate
crypt_level=high
certificate=
key_file=
ssl_protocols=TLSv1.2, TLSv1.3
autorun=
allow_channels=true
allow_multimon=true
bitmap_cache=true
bitmap_compression=true
bulk_compression=true
max_bpp=128
use_compression=yes
new_cursors=true
use_fastpath=both
blue=009cb5
grey=dedede
ls_top_window_bg_color=009cb5
ls_width=350
ls_height=430
ls_bg_color=dedede
ls_logo_filename=
ls_logo_x_pos=55
ls_logo_y_pos=50
ls_label_x_pos=30
ls_label_width=65
ls_input_x_pos=110
ls_input_width=210
ls_input_y_pos=220
ls_btn_ok_x_pos=142
ls_btn_ok_y_pos=370
ls_btn_ok_width=85
ls_btn_ok_height=30
ls_btn_cancel_x_pos=237
ls_btn_cancel_y_pos=370
ls_btn_cancel_width=85
ls_btn_cancel_height=30

[Logging]
LogFile=xrdp.log
LogLevel=DEBUG
EnableSyslog=true
SyslogLevel=DEBUG

[Channels]
rdpdr=true
rdpsnd=true
drdynvc=true
cliprdr=true
rail=true
xrdpvr=true
tcutils=true

[Xvnc]
name=Xvnc
lib=libvnc.so
username=ask
password=ask
ip=127.0.0.1
port=-1

[Xorg]
name=Xorg
lib=libxup.so
username=ask
password=ask
ip=127.0.0.1
port=-1
code=20

I am trying to connect with mstsc to this machine (this is after a fresh PC restart; no one has logged in):

(screenshot of the mstsc connection)

While this login box is shown, no disconnect happens:

(screenshot of the xrdp login box)

After I enter the correct login/password, I get a black screen first and then the mstsc window closes. I tried to connect from the KDE remote connection application, but it failed the same way. xrdp.log doesn't seem to contain anything interesting:

[20190606-04:14:36] [DEBUG] xrdp_wm_log_msg: connecting to sesman ip 127.0.0.1 port 3350
[20190606-04:14:36] [INFO ] xrdp_wm_log_msg: sesman connect ok
[20190606-04:14:36] [DEBUG] xrdp_wm_log_msg: sending login info to session manager, please wait...
[20190606-04:14:36] [DEBUG] return value from xrdp_mm_connect 0
[20190606-04:14:36] [INFO ] xrdp_wm_log_msg: login successful for display 10
[20190606-04:14:36] [DEBUG] xrdp_wm_log_msg: VNC started connecting
[20190606-04:14:36] [DEBUG] xrdp_wm_log_msg: VNC connecting to 127.0.0.1 5910
[20190606-04:14:36] [DEBUG] xrdp_wm_log_msg: VNC tcp connected
[20190606-04:14:36] [DEBUG] xrdp_wm_log_msg: VNC security level is 2 (1 = none, 2 = standard)
[20190606-04:14:36] [DEBUG] xrdp_wm_log_msg: VNC password ok
[20190606-04:14:36] [DEBUG] xrdp_wm_log_msg: VNC sending share flag
[20190606-04:14:36] [DEBUG] xrdp_wm_log_msg: VNC receiving server init
[20190606-04:14:36] [DEBUG] xrdp_wm_log_msg: VNC receiving pixel format
[20190606-04:14:36] [DEBUG] xrdp_wm_log_msg: VNC receiving name length
[20190606-04:14:36] [DEBUG] xrdp_wm_log_msg: VNC receiving name
[20190606-04:14:36] [DEBUG] xrdp_wm_log_msg: VNC sending pixel format
[20190606-04:14:36] [DEBUG] xrdp_wm_log_msg: VNC sending encodings
[20190606-04:14:36] [DEBUG] xrdp_wm_log_msg: VNC sending framebuffer update request
[20190606-04:14:36] [DEBUG] xrdp_wm_log_msg: VNC sending cursor
[20190606-04:14:36] [DEBUG] xrdp_wm_log_msg: VNC connection complete, connected ok
[20190606-04:14:36] [DEBUG] xrdp_wm_log_msg: connected ok
[20190606-04:14:36] [DEBUG] xrdp_mm_connect_chansrv: chansrv connect successful
[20190606-04:14:36] [DEBUG] Closed socket 18 (AF_INET 127.0.0.1:47744)
[20190606-04:14:37] [DEBUG] Closed socket 20 (AF_UNIX)
[20190606-04:14:37] [DEBUG] Closed socket 12 (AF_INET 127.0.0.1:3389)
[20190606-04:14:37] [DEBUG] xrdp_mm_module_cleanup
[20190606-04:14:37] [DEBUG] VNC mod_exit
[20190606-04:14:37] [DEBUG] Closed socket 19 (AF_INET 127.0.0.1:40224)
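For reference, since xrdp.log shows the VNC backend connecting fine and then everything closing, the failure is often inside the desktop session itself; a sketch of other logs worth checking (paths are the usual CentOS 7 locations and may differ):

# session-manager side
sudo tail -n 50 /var/log/xrdp-sesman.log
# the Xvnc session's own log and the user's X session errors, if present
tail -n 50 ~/.vnc/*.log 2>/dev/null
tail -n 50 ~/.xsession-errors 2>/dev/null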

How can I fix that?

How to choose Qt installation?

Posted: 30 Apr 2021 09:16 AM PDT

I've built (configure, make, make install) Qt 5.4.2 from sources on my Debian 7.8 x64. Now I try to build Qt Creator, but my system is unable to locate this specific Qt build.

First, I tried to set $QTDIR and $PATH, but it didn't work. Then I read that it is necessary to use qtchooser. So, this is what I have so far:

ber@mydeb:/usr/lib/x86_64-linux-gnu/qtchooser$ qtchooser -l
4
5
@5
@qt5
default
qt4-x86_64-linux-gnu
qt4
qt5-x86_64-linux-gnu
qt5

Initially, there was no default conf, but I've created it using the following:

ber@mydeb:/usr/lib/x86_64-linux-gnu/qtchooser$ sudo nano default.conf  

with the text:

QT_SELECT="5"
QTTOOLDIR="/usr/local/Qt-5.4.1/bin"
QTLIBDIR="/usr/local/Qt-5.4.1/"

After this, my system still used the wrong Qt (from the /usr/lib/x86_64-linux-gnu/ folder), which does not contain a working Qt installation.

Then, I tried to set QT_SELECT=default, then QT_SELECT=5 and here is what I have now:

qtchooser -print-env
QT_SELECT="qt5"
QTTOOLDIR="QT_SELECT="5""
QTLIBDIR="QTTOOLDIR="/usr/local/Qt-5.4.1/bin""

i.e., the QTTOOLDIR variable is wrong, and here is the error displayed when I try to use qmake:

qmake -v
qmake: could not exec 'QT_SELECT="5"/qmake': No such file or directory

What should I do to choose the correct Qt installation (the one installed to the /usr/local/Qt-5.4.1/ folder)?
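For illustration, if the installed qtchooser supports its -install mode, it can generate the configuration entry from the path to qmake instead of a hand-written .conf file (a sketch; the entry name is arbitrary and the path assumes the build was installed under /usr/local/Qt-5.4.1):

# register the local build and select it for this shell
qtchooser -install qt-5.4 /usr/local/Qt-5.4.1/bin/qmake
export QT_SELECT=qt-5.4
qmake -v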

How to disable the user-specific tmpfs /run/user/1000

Posted: 30 Apr 2021 07:32 AM PDT

Is there any way to stop the system from creating the user-specific tmpfs mount /run/user/1000 for each login session?

I know this is a new feature, but I want to get the system running the older way.

How to set UIDs and GIDs during install of Debian?

Posted: 30 Apr 2021 09:07 AM PDT

During installation of Debian, the default initial UID and GID are 1000. When creating a user account during the install, is there a way to specify specific values, for example 3197?
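For reference, a sketch of the preseed approach: Debian's example preseed file documents a passwd/user-uid setting for the first user, so this fits an automated or preseeded install (I cannot confirm a matching GID option, so the group may still need adjusting afterwards):

# excerpt for a preseed.cfg (sketch)
d-i passwd/username string myuser
d-i passwd/user-uid string 3197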

In scripts is partx better than fdisk for reading partition table?

Posted: 30 Apr 2021 08:07 AM PDT

I have had some scripts that used the output of "fdisk -l" fail on different versions of Linux, because the output of fdisk differs slightly.

The "partx --show" command appears to be standard on most systems (packaged along with fdisk).

If I convert the scripts to parse the output of "partx --show", will they hold up better over time? Have your scripts that use partx been stable and portable across releases?
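For script use, both partx and lsblk can emit machine-readable output with explicitly selected columns, which avoids depending on human-oriented formatting entirely. A sketch (the device name is an example):

# key="value" pairs containing only the columns the script actually needs
partx --pairs --output NR,START,SECTORS,TYPE /dev/sda
lsblk --pairs --output NAME,TYPE,SIZE,FSTYPE /dev/sda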

Untar filenames in a character encoding different from encoding used in the filesystem

Posted: 30 Apr 2021 10:02 AM PDT

I occasionally get tarballs where the filenames are encoded in ISO-8859-1 or some other pre-Unicode scheme. My system uses UTF-8, so when I untar these archives with the usual options (tar xvf foo.tar) I end up with a directory full of mojibake filenames.

Until now I've been using convmv to convert the filenames to UTF-8 after they've been extracted. This is a bit inconvenient, because I either need to invoke convmv on each affected file, or else untar the archive into a new directory, run convmv on the entire directory, and then move the files to where I wanted them originally. Short of coding this functionality into a shell script, is there some way of converting the archived filenames to UTF-8 on the fly, as they are being untarred?

How to extract logs between two time stamps

Posted: 30 Apr 2021 08:43 AM PDT

I want to extract all logs between two timestamps. Some lines may not have a timestamp, but I want those lines too. In short, I want every line that falls between the two timestamps. My log structure looks like this:

[2014-04-07 23:59:58] CheckForCallAction [ERROR] Exception caught in +CheckForCallAction :: null
--Checking user--
Post
[2014-04-08 00:00:03] MobileAppRequestFilter [DEBUG] Action requested checkforcall

Suppose I want to extract everything between 2014-04-07 23:00 and 2014-04-08 02:00.

Please note that the start or end timestamp may not be present in the log, but I want every line between these two timestamps.
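For illustration, a sketch in awk: timestamped lines switch a flag on or off, and lines without a timestamp simply inherit the current state, so they are kept whenever they fall inside the window. Plain string comparison works because the timestamps use a fixed YYYY-MM-DD HH:MM:SS format; the log file name is a placeholder:

awk -v start="2014-04-07 23:00" -v end="2014-04-08 02:00" '
    /^\[[0-9-]+ [0-9:]+\]/ {
        ts = substr($0, 2, 19)            # e.g. "2014-04-07 23:59:58"
        if (ts >= start) inside = 1       # window opens at the first timestamp >= start
        if (ts > end)    inside = 0       # and closes at the first timestamp past end
    }
    inside                                # prints timestamped and untimestamped lines alike
' app.log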
