Tuesday, November 16, 2021

Recent Questions - Unix & Linux Stack Exchange



How to install Jekyll with rbenv on a Debian-based distro?

Posted: 16 Nov 2021 11:47 AM PST

What I have tried:

Installing ruby with: rbenv install 2.4.1

Then installing Jekyll with gem: gem install bundler jekyll

Error messages:

user@user:~/git$ rbenv global 2.4.1
user@user:~/git$ gem install jekyll bundle
ERROR:  While executing gem ... (Gem::FilePermissionError)
    You don't have write permissions for the /var/lib/gems/2.5.0 directory.
user@user:~/git$ gem install jekyll bundler
ERROR:  While executing gem ... (Gem::FilePermissionError)
    You don't have write permissions for the /var/lib/gems/2.5.0 directory.
user@user:~/git$
user@user:~/git$ rbenv shell 2.4.1
rbenv: no such command `shell'
user@user:~/git$

I also tried the installation script again:

user@user:~/git$ rbenv install 2.4.1
rbenv: /home/user/.rbenv/versions/2.4.1 already exists
continue with installation? (y/N) y
Downloading ruby-2.4.1.tar.bz2...
-> https://cache.ruby-lang.org/pub/ruby/2.4/ruby-2.4.1.tar.bz2
Installing ruby-2.4.1...
Installed ruby-2.4.1 to /home/user/.rbenv/versions/2.4.1

user@user:~/git$

OS: Trisquel (Ubuntu-based)
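
For what it's worth, the /var/lib/gems/2.5.0 path in the error suggests that gem is still resolving to the system Ruby rather than the rbenv-installed 2.4.1. A minimal sketch of the shell setup rbenv normally needs (an assumption, since the question doesn't show ~/.bashrc) looks like this:

# append to ~/.bashrc, then open a new shell -- paths assume a default rbenv checkout
export PATH="$HOME/.rbenv/bin:$PATH"
eval "$(rbenv init -)"           # puts the rbenv shims first and enables `rbenv shell`

rbenv global 2.4.1
rbenv rehash
which gem                        # should now print ~/.rbenv/shims/gem, not /usr/bin/gem
gem install bundler jekyll       # installs under ~/.rbenv, no root needed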


Thanks for any help you can provide!

SSH permission denied error on ssh to localhost, freshly reset SSH keys

Posted: 16 Nov 2021 11:41 AM PST

I'm running into (what I think is) a very odd situation.

I was having some trouble connecting a specific user to another host via SSH, so I thought I'd try a simple re-initialization of all SSH keys for this user. I did:

cd ~/.ssh
rm *
ssh-keygen -t rsa
cp id_rsa.pub authorized_keys
chmod 600 authorized_keys

Which I thought would have totally reset my SSH connection. But now, when I do:

ssh localhost  

I get:

Permission denied (publickey,gssapi-keyex,gssapi-with-mic).  

If I do:

ssh -v localhost  

I see:

OpenSSH_7.4p1, OpenSSL 1.0.2k-fips  26 Jan 2017
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 58: Applying options for *
debug1: Connecting to localhost [127.0.0.1] port 22.
debug1: Connection established.
debug1: identity file /home/lkushwaha/.ssh/id_rsa type 1
debug1: key_load_public: No such file or directory
debug1: identity file /home/lkushwaha/.ssh/id_rsa-cert type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/lkushwaha/.ssh/id_dsa type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/lkushwaha/.ssh/id_dsa-cert type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/lkushwaha/.ssh/id_ecdsa type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/lkushwaha/.ssh/id_ecdsa-cert type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/lkushwaha/.ssh/id_ed25519 type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/lkushwaha/.ssh/id_ed25519-cert type -1
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_7.4
debug1: Remote protocol version 2.0, remote software version OpenSSH_7.4
debug1: match: OpenSSH_7.4 pat OpenSSH* compat 0x04000000
debug1: Authenticating to localhost:22 as 'lkushwaha'
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug1: kex: algorithm: curve25519-sha256
debug1: kex: host key algorithm: ecdsa-sha2-nistp256
debug1: kex: server->client cipher: chacha20-poly1305@openssh.com MAC: <implicit> compression: none
debug1: kex: client->server cipher: chacha20-poly1305@openssh.com MAC: <implicit> compression: none
debug1: kex: curve25519-sha256 need=64 dh_need=64
debug1: kex: curve25519-sha256 need=64 dh_need=64
debug1: expecting SSH2_MSG_KEX_ECDH_REPLY
debug1: Server host key: ecdsa-sha2-nistp256 SHA256:lLEq3WpI9BbwnU8WXoeWp0s/DFJr7UwmnnC1nUA4KKc
debug1: Host 'localhost' is known and matches the ECDSA host key.
debug1: Found key in /home/lkushwaha/.ssh/known_hosts:1
debug1: rekey after 134217728 blocks
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug1: SSH2_MSG_NEWKEYS received
debug1: rekey after 134217728 blocks
debug1: SSH2_MSG_EXT_INFO received
debug1: kex_input_ext_info: server-sig-algs=<rsa-sha2-256,rsa-sha2-512>
debug1: SSH2_MSG_SERVICE_ACCEPT received
debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic
debug1: Next authentication method: gssapi-keyex
debug1: No valid Key exchange context
debug1: Next authentication method: gssapi-with-mic
debug1: Unspecified GSS failure.  Minor code may provide more information
No Kerberos credentials available (default cache: KEYRING:persistent:1074)

debug1: Unspecified GSS failure.  Minor code may provide more information
No Kerberos credentials available (default cache: KEYRING:persistent:1074)

debug1: Next authentication method: publickey
debug1: Offering RSA public key: /home/lkushwaha/.ssh/id_rsa
debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic
debug1: Trying private key: /home/lkushwaha/.ssh/id_dsa
debug1: Trying private key: /home/lkushwaha/.ssh/id_ecdsa
debug1: Trying private key: /home/lkushwaha/.ssh/id_ed25519
debug1: No more authentication methods to try.
Permission denied (publickey,gssapi-keyex,gssapi-with-mic).

I don't understand what's going on. It seems to me that a simple ssh to localhost should work just fine.
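
One thing worth checking (an assumption, since the question doesn't show directory permissions): with the default StrictModes, sshd silently ignores keys when the home directory, ~/.ssh, or authorized_keys are group- or world-writable, and the server-side log usually says exactly why. A quick sketch:

chmod go-w ~                      # home directory must not be group/world-writable
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
# then watch the server's view of the failure while retrying:
sudo tail -f /var/log/secure      # or /var/log/auth.log, depending on the distro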

Any clues? Thanks!

How can I use a prefix in the <glob> element of a mime info file?

Posted: 16 Nov 2021 11:23 AM PST

I have a bunch of files in the form of xyz-timestamp.log, which I want to associate with a different application than normal log files.

I read up on MIME types and found out that I can add a mime-info file to ~/.local/share/mime/packages describing my new MIME type.

The file I came up with looks like this:

<?xml version="1.0" encoding="utf-8"?>
<mime-info xmlns="http://www.freedesktop.org/standards/shared-mime-info">
    <mime-type type="text/x-xyz-log">
        <comment>XYZ Log</comment>
        <glob pattern="xyz*.log" weight="100" />
    </mime-type>
</mime-info>

But it doesn't take effect. It seems like only *.log patterns or specific filenames like sources.list for apt work, and not ones where the asterisk is in the middle.

The freedesktop.org standard mentions nothing about certain glob features being unsupported, so I'm completely in the dark on this.
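
One step that is easy to miss (assuming it wasn't already done, since the question doesn't mention it) is regenerating the user MIME cache after adding the package file, and then asking the database what it actually resolves. The file name and .desktop entry below are placeholders:

update-mime-database ~/.local/share/mime
xdg-mime query filetype xyz-20211116.log        # check an existing sample file
xdg-mime default myviewer.desktop text/x-xyz-log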

Does tc (traffic control) enqueue packets or frames?

Posted: 16 Nov 2021 11:16 AM PST

Assume we are working with TCP/IP connection.

According to https://wiki.linuxfoundation.org/networking/kernel_flow, tc works on Layer 2. However, everywhere else on the internet (man pages, LARTC, ...) the term "packet" is used to describe the units of data that tc works with.

What does tc actually enqueue - Packets or Frames?


The answer is important to help me understand the following scenario:

Assume we use the PRIO qdisc with 2 classes. A big packet (say 100 Kbytes) comes from a low-priority source. The packet is divided into some frames, which get classified to an appropriate queue. The transmission starts, and then a high-priority packet is generated. It gets classified to the appropriate high-priority queue and needs to be sent ASAP. Do the frames of this high-priority data get transmitted as soon as they arrive, or must they wait until all of those 100 Kbytes of low-priority data are sent completely?
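
For concreteness, a minimal PRIO setup of the kind the scenario assumes might look like the sketch below (eth0, the band count, and the port-based filter are placeholders, not taken from the question):

# root PRIO qdisc with the default 3 bands; band 0 is drained before band 1, etc.
tc qdisc add dev eth0 root handle 1: prio
# steer interactive traffic (here: ssh, as an example) into the highest-priority band
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 \
    match ip dport 22 0xffff flowid 1:1
# everything else falls through to the lower bands via the default priomap
tc -s qdisc show dev eth0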

How do I install the yast clone_system module from the command line?

Posted: 16 Nov 2021 11:03 AM PST

# yast clone_system
No such client module clone_system

is the error I get after booting SLE-15-SP3-Full-x86_64-GM-Media1.iso and choosing the Basesystem-Module_15.3-0, Public-Cloud-Module_15.3-0, and Server-Applications-Module_15.3-0 repositories.

# zypper lr
# | Alias                             | Name                           | Enabled | GPG Check | Refresh
--+-----------------------------------+--------------------------------+---------+-----------+--------
1 | Basesystem-Module_15.3-0          | sle-module-basesystem          | No      | ----      | ----
2 | Public-Cloud-Module_15.3-0        | sle-module-public-cloud        | No      | ----      | ----
3 | SLES15-SP3-15.3-0                 | SLES15-SP3-15.3-0              | No      | ----      | ----
4 | Server-Applications-Module_15.3-0 | sle-module-server-applications | No      | ----      | ----

Why nothing is enabled is confusing ...

# zypper mr -e 1 2 3 4  

and finally

# zypper in autoyast2  

allows the yast clone_system command to create an autoyast.xml file.

So is there some shortcut that would have allowed me to skip all the setup (AND the hours spent searching for these few simple command lines) - what am I missing?

remove specific folder and files on remote machine

Posted: 16 Nov 2021 10:40 AM PST

I need to remove all the files from a specific folder that was copied to a remote machine using the following command:

tar -c test_sandy | ssh sky@my_home_vm 'tar -xf - -C /scratch/backup'  

I can see that all the files from test_sandy have been copied to the remote machine; now I want to remove these files after, say, one minute:

ssh my_home_vm find $backup_path/test_sandy/* -type d -mmin +1 -exec rm -rf {} \;    

But I don't see those files getting deleted. If I run the delete directly on the remote machine (instead of going through "ssh my_home_vm"), it works.
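
One likely culprit (an assumption, since the question doesn't show where $backup_path is set): in the unquoted form, $backup_path and the * glob are expanded by the local shell before ssh ever runs, so the remote side may receive an empty path or locally expanded names. A sketch that keeps the expansion on the remote side, using the literal destination from the tar command in place of $backup_path:

ssh sky@my_home_vm 'find /scratch/backup/test_sandy -mindepth 1 -type d -mmin +1 -exec rm -rf {} +'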

Interactive bash in Docker under mingw on Windows

Posted: 16 Nov 2021 11:24 AM PST

I'm using a bash shell (MINGW64) on Windows to run bash from a Docker container.

Tobi@DESKTOP MINGW64 /
$ docker run -i debian bash
ls

gives the result bash: line 1: $'ls\r': command not found, which from what I can tell is because there's a \r prepended to the usual \n when I press the Enter key, as I'm on Windows.

Anyone know a good fix for this?
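
Two things commonly suggested for this setup (sketches, not verified against this exact MinGW/Docker combination): allocate a pseudo-TTY so bash runs interactively instead of reading piped lines, and wrap the call in winpty, which Git Bash/MSYS2 ships for exactly this kind of console translation:

winpty docker run -it debian bash    # -t allocates a TTY; winpty bridges the MinTTY console

Some people also add alias docker='winpty docker' to their MinGW ~/.bashrc so this happens automatically.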

Fresh Fedora 35 - what are these active internet connections doing?

Posted: 16 Nov 2021 10:49 AM PST

To get to this point, I had a Fedora Workstation 34 install (ISO about a week old), ran updates, rebooted, and clicked "Install Fedora 35" as it was on offer from GNOME Software. I hadn't even opened Firefox yet, and didn't install anything else.

I ran netstat just to see what was happening by default. There were a couple of established connections to Cloudflare addresses with no additional whois info to go on, OK,

and one to an oscp-router.gno host (assuming that's GNOME extensions).

But this one really stands out:

Active Internet connections (w/o servers)
Proto Recv-Q Send-Q Local Address           Foreign Address           State
tcp       25      0 fedora:58440            proxy14.fedorapro:https   CLOSE_WAIT

Why did Fedora connect to proxy14.fedorapro? I can't find any information about this. fedorapro.com is a parked domain.
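
One caveat worth noting (an assumption on my part, not something the question confirms): netstat truncates the Foreign Address column, so "proxy14.fedorapro" may be a cut-off hostname rather than the fedorapro.com domain. A quick way to see the untruncated peer and the owning process:

ss -tnp state close-wait          # numeric peer address plus the process that owns the socket
netstat -W                        # net-tools' --wide option, which stops truncating hostnames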

How to sync time across operating systems in a triple boot setup?

Posted: 16 Nov 2021 10:17 AM PST

I have installed Windows, Ubuntu and Kali Linux on my laptop but when I switch OS the system time changes.

I tried following a YouTube tutorial and created a new value called "RealTimeIsUniversal" under the TimeZoneInformation key in the Windows registry, then set its value data to 1. But this did not solve the problem.

Anyone know how to solve it?
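
The usual trade-off here is whether the hardware clock holds UTC or local time. If the RealTimeIsUniversal approach on the Windows side keeps failing, the mirror-image fix on the Linux side (a sketch; it would need to be run in both Ubuntu and Kali) is to tell systemd to treat the RTC as local time, matching Windows' default:

timedatectl set-local-rtc 1 --adjust-system-clock
timedatectl status          # "RTC in local TZ: yes" confirms the change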

Escape characters in sed transliterate

Posted: 16 Nov 2021 11:14 AM PST

I want to use sed's transliterate (y///) to replace one set of characters by another.

I would expect this to work just as well as using the tr program.

$ echo '[]{}abc' | tr '[ab}' 'gefh'
g]{hefc

However, when I go to perform this same operation with sed, I see the following error:

$ echo '[]{}abc' | sed 'y/[ab}/gefh/'
sed: 1: "y/[ab}/gefh/": unbalanced brackets ([])

This makes some sense, as I expect to need to escape the [ character. However, when I try to escape it, I receive the following, different error:

$ echo '[]{}abc' | sed 'y/\[ab}/gefh/'
sed: 1: "y/\[ab}/gefh/": transform strings are not the same length

My current work-around is to either (1) just use tr, or (2) insert a "dummy character" on the right-hand side of the transliteration whose only job is to match the escape character.

$ echo '[]{}abc' | sed 'y/\[ab}/_gefh/'
g]{hefc

This is however unsatisfying and suspicious. It's also not very safe, e.g. when \ is in the input string.

$ echo '[]{}abc\' | sed 'y/\[ab}/_gefh/'
g]{hefc_

What's the correct way to escape a character in a sed transliteration without sed treating the escape character itself as part of the translation?
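
For comparison (and as an assumption about which implementation is in play, since the "unbalanced brackets" message looks like BSD/macOS sed): POSIX only gives the backslash, the delimiter, and \n special meaning inside y///, so GNU sed accepts the unescaped form as-is:

$ echo '[]{}abc' | sed 'y/[ab}/gefh/'     # GNU sed: [ is literal inside y///
g]{hefc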

How is simultaneous i2c bus access handled by Linux kernel?

Posted: 16 Nov 2021 09:35 AM PST

I have two sensors with different slave addresses connected to an I2C bus on my custom board. If two different programs try to read the different sensors at the same moment in time, will it lead to contention in the I2C subsystem of the Linux kernel?

I tried doing that on the board with both sensors but was not able to produce any contention, and I didn't find any definitive documentation that suggests either way.

Is there any document which explains how simultaneous read calls to different slave addresses on a given I2C bus are handled by the Linux kernel?
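
For reproducing the test from user space, here is a sketch with i2c-tools (the bus number and the two slave addresses are placeholders, not values from the question) that fires reads at both devices concurrently:

# hammer both sensors at once from two background loops
for addr in 0x48 0x76; do
    ( for i in $(seq 1000); do i2cget -y 1 "$addr" 0x00; done ) &
done
wait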

Read file from user input with a list of prefixes, then call file with prefixes in while loops

Posted: 16 Nov 2021 10:48 AM PST

I am trying to feed a user-supplied input file into a while loop, but I keep failing when I run the script.

The user input file genelist contains a list of numbers which I have been using as prefixes for my other files, e.g. 012.laln, 012.model.

genelist:

012
013
025
039
109
.
.
.

This is the script I have been testing on.

#!/usr/bin/env bash
read -pr "genefile: " genelist
read -pr "treefile: " trees
read -pr "workers: " workers

while read -r i; do
    while read -r j; do
        raxml-ng --sitelh --msa "$i".laln --model "$j".model --tree "${trees}" --workers "${workers}" --prefix "$i"-rT;
    done < "$genelist".model;
done < "$genelist"

In order to execute raxml-ng tool, I need to input files for --msa, --model, --tree, --workers and --prefix for output file name. I need to repeat the process with multiple files, each 012.laln need to match with 012.model and generate output file named 012-rT. The input files for tree and workers are the same for all the files.

I kept getting this error:

line 2: read: `genefile: ': not a valid identifier
line 3: read: `treefile: ': not a valid identifier
line 4: read: `workers: ': not a valid identifier

I modified the way I call the user input file "genelist" in a few ways, but to no avail:

while read -r "${genelist}" ...
while read -r "${genelist{@}}" ...
while read -r "{genelist}" ...
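
The error messages point at option clustering rather than at the loop itself: in read -pr "genefile: " genelist, the r is consumed as the argument of -p, so "genefile: " is then treated as a variable name, which is not a valid identifier. Below is a sketch with the options reordered, and with the inner loop reading "$i".model so it mirrors the working for-loop further below (both details are my reading of the intent, not confirmed):

#!/usr/bin/env bash
read -rp "genefile: " genelist      # -p must come last in the cluster so the prompt is its argument
read -rp "treefile: " trees
read -rp "workers: " workers

while read -r i; do
    while read -r j; do
        raxml-ng --sitelh --msa "$i".laln --model "$j" \
                 --tree "$trees" --workers "$workers" --prefix "$i"-rT
    done < "$i".model
done < "$genelist"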

Before this, I had been using for loops, i.e., the one-liner below, and it worked well. I would like to try while loops if possible.

for i in $(cat genelist); do for j in $(cat $i.model); do raxml-ng --sitelh --msa $i.laln.trgc38_1l --model $j --tree trees --workers 4 --prefix $i-rT; done; done  

Question: what is the correct and neat way to feed the user input file genelist into the while loops?

There are some examples I found here, but those use numbers/number sequences in the loops. The answers suggested using C-style for/while loops to solve the issue, but that doesn't seem relevant to my case.

Meanwhile, any better alternative to for/while loops in this case is also welcome!

Fine-grained restriction of remote sudo execution

Posted: 16 Nov 2021 09:21 AM PST

My customer has many important scripts in their sudoers file, and they want to be able to deny the execution of these scripts if the user is logged in remotely.

I have not figured out a good way to go about this, as it seems like most paths I could go down only restrict by things like users, TTY, remote hostname, IP, etc., but not by something as fine-grained as restricting by script and remote vs. local.

They want there to be some concept of a blacklist of scripts, these being the ones that can't be executed remotely (or vice versa with a whitelist). I also don't want to block the use of sudo for reasons other than running special scripts.

My problem involves creating a process that catches when someone calls sudo with one of these scripts, checks whether the user is remote or local, and checks the blacklist for that script, allowing the script to run with privilege once it's confirmed that either the user is local, or the user is remote and the script isn't in the blacklist (but is in sudoers), and denying execution otherwise.

Is this feasible in any way?

They might want something that can't be done. I've been looking at custom PAM modules. The only issue I see is getting that script from stdin/command line once sudo is called.

I also want to look at SELinux as a possible solution, but I do not know much about it. I'm wondering if I need to restructure what they want and find a different solution entirely, or if this could work somehow.

Does it make more sense to have different groups of privileged/unprivileged/remote/local users to have access to these scripts?

This is a large distributed system and I don't think they like the idea of creating more users.
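
One direction that avoids PAM entirely (a sketch only, under the assumption that "remote" means "logged in over SSH", and not a vetted security control): let sudoers permit only a wrapper, and have the wrapper consult who -m, whose output includes the originating host in parentheses for SSH logins even under sudo's env_reset:

#!/bin/sh
# /usr/local/sbin/run-privileged  (hypothetical name; this would be the only sudoers entry)
BLACKLIST="backup-prod rotate-keys"        # placeholder list of scripts denied to remote users

script=$1; shift
# note: local graphical logins may show "(:0)" here and would need to be allowed for explicitly
if who -m | grep -q '(' ; then             # a "(host)" suffix marks a remote login session
    for denied in $BLACKLIST; do
        [ "$script" = "$denied" ] && { echo "denied: $script may not be run remotely" >&2; exit 1; }
    done
fi
exec /opt/admin-scripts/"$script" "$@"     # placeholder path to the customer's scripts

This only holds up if the wrapper is the sole privileged entry point; any broader sudoers grant lets the user bypass it.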

How to search for a string in a very large file with very long lines?

Posted: 16 Nov 2021 11:07 AM PST

I have a very large file (over 100 GB) with very long lines (can't even fit in 8 GB RAM) and I want to search it for a string. I know grep can't do it because grep tries to put entire lines into memory.

So far the best solution I've come up with is:

awk '/search-string-here/{print "Found."}' large-file-with-long-lines.txt  

I'm actually happy with this solution, but I'm just wondering if there is some more intuitive way to do it. Maybe some other implementation of grep?

Search and replace lines AFTER a regex match using "sed"

Posted: 16 Nov 2021 09:11 AM PST

This kind of feels like it would be easier in awk, but I'm curious if sed can do it. Here is my input:

line 1
line 2
line 3
line 1
line 2
line 3
line 1
line 2
line 3

I'd like to write an in-place regex that finds the second line 1, then replaces all line 3's found after that. The output would look like this:

line 1
line 2
line 3
line 1
line 2
replaced
line 1
line 2
replaced

I'm not really looking for "clever" solutions that only apply to this input. I want to learn if there is a general-purpose way to search and replace after a match with sed.

I thought the solution would be somewhere in the addr documentation, but it doesn't seem to describe /starting point/,s/... as something you can do, and I'm getting an error when I attempt to do so.
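
For what it's worth, the general sed form for "replace only after a match" is an address range ending at $, e.g. sed '/line 1/,$ s/line 3/replaced/' — but that range starts at the first match, so selecting the second occurrence specifically is where sed gets awkward. A sketch of both, with input.txt standing in for the real file (the awk version counts occurrences, which is my workaround rather than anything from the sed manual):

# sed: substitute only from the first "line 1" to end of file
sed '/line 1/,$ s/line 3/replaced/' input.txt

# awk: start substituting only after the second "line 1"
awk '/line 1/ && ++n == 2 { go = 1 }
     go { sub(/line 3/, "replaced") }
     { print }' input.txt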

How to restore a broken sudoers file without being able to use sudo?

Posted: 16 Nov 2021 10:20 AM PST

I'm getting the following error from sudo:

$ sudo ls
sudo: /etc/sudoers is owned by uid 1000, should be 0
sudo: no valid sudoers sources found, quitting
sudo: unable to initialize policy plugin

Of course I can't chown it back to root without using sudo. We don't have a password on the root account either.

I honestly don't know how the system got into this mess, but now it's up to me to resolve it.

Normally I would boot into recovery mode, but the system is remote and only accessible over a VPN while booted normally. For the same reason, booting from a live CD or USB stick is also impractical.

The system is Ubuntu 16.04 (beyond EOL, don't ask), but the question and answers are probably more general.
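
One escape hatch that often still works in this situation (a sketch, assuming polkit is installed and the account is in the admin/sudo group, which is typical on Ubuntu 16.04 but not confirmed here): pkexec obtains root through PolicyKit rather than sudoers, so it can repair the ownership:

pkexec chown root:root /etc/sudoers
pkexec chmod 0440 /etc/sudoers
pkexec visudo -c                    # sanity-check the file before relying on sudo again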

rsync not working with Linux Mint 20 as remote

Posted: 16 Nov 2021 10:40 AM PST

I've been using rsync for years and have never come upon such a strange issue: everything works fine unless the remote machine is running Linux Mint 20 (tried with two of those; one is still running 20.1 and the other is a fresh and clean install of 20.2). Whether for a single file, an entire directory, or just listing resources, rsync hangs immediately after negotiation (no matter whether I "push" or "pull"). The very same commands work fine if the remote is Debian, Armbian, or even Mint 18 (I just switched on an old laptop to check). Even a simple thing such as

rsync remote:/path/to/file .  

hangs. I've experimented with the available debug options and saw that everything goes fine up to the authentication (either via SSH key or password). If I present a wrong password, I get the proper "exit", but if I provide the correct password/key, immediately after authentication the session hangs with no more clues given. The last thing I see is exec request accepted. Using ps on the remote machine shows the corresponding rsync processes have been spawned (and yes, before you ask: connecting to the machine via SSH works fine, and even scp does its job as expected – just rsync does not).

As a last resort and work-around I've started a temporary rsyncd on the remote machine:

rsync --config=/tmp/rsyncd.conf --daemon --no-detach  

and then used rsync remote::share/path/to/file. While this worked and I got the current task done, I don't want to repeat that every time I need to sync something.


Edits:

As one might assume that some output from the remote .bashrc interferes: ssh remotehost /bin/true > out.dat (as the man page suggests for checking) results in a zero-byte file, so this should not be the cause.

strace-ing the 3 remotely spawned rsync processes (which, by the way, all have the same command line, one with and two without a leading bash) shows 2 of them ending with wait4(-1,, and the other with {tv_sec=32, tv_usec=23154}. The "client side" (where rsync is invoked by the user) also shows, as expected, a wait4(-1,, as it most likely waits for the remote side to respond.


Any ideas what the culprit might be, how to solve the issue, or even how to narrow it down further? As for debugging, I've already used rsync --debug=all4 -avve "ssh -vvv" …, which is how I got as far as described.
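
Two low-risk experiments for narrowing down (sketches only; neither is a confirmed fix for this Mint 20 behaviour): pin an older wire protocol in case the 3.1.x/3.2.x negotiation is what stalls, and take the delta algorithm out of the picture:

rsync --protocol=30 remote:/path/to/file .        # force the pre-3.2 wire protocol
rsync -avv -W remote:/path/to/file .              # --whole-file: skip the delta algorithm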


For reference, the rsyncd.conf used in above mentioned work-around:

use chroot = true
hosts allow = 192.168.0.0/24

transfer logging = true
log file = /tmp/rsyncd.log
log format = %h %o %f %l %b

[share]
comment = Share
path = /mnt/share
read only = no
list = yes
uid = nobody
gid = nogroup

Bash command-history stopped working

Posted: 16 Nov 2021 09:57 AM PST

For some reason there are no new entries in my .bash_history file, and executing history doesn't return anything. The owner of the history file is correct and has read and write access. I've tried

set -o history  

but it didn't help either.

Does anybody know what might have triggered this behavior and how to re-enable history from this point?

EDIT: here are a few useful details

$ echo $HISTFILESIZE
-1
$ echo $HISTSIZE
-1
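
A few more data points that usually pinpoint this kind of breakage (a diagnostic sketch; the HISTSIZE=-1 detail matters because, if I recall correctly, negative values only mean "unlimited" from bash 4.3 onward):

echo "$HISTFILE"              # should point at the history file you expect
set -o | grep history         # "history on" means history recording is enabled
shopt -p | grep -i hist       # histappend and friends
echo "$PROMPT_COMMAND"        # anything here that rewrites the history variables?
bash --version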

Problem while maximizing Google Chrome in Xubuntu 20.04

Posted: 16 Nov 2021 11:13 AM PST

I'm using Xubuntu 20.04. When I try to maximize Google Chrome, the maximize button, minimize button and the close button end up above the visible screen area. [screenshot]

Unable to change files/folder attributes on NAS sharing through samba server

Posted: 16 Nov 2021 10:41 AM PST

My situation:

  • Client (Windows 10) -> Server (Debian 10 / Samba Version 4.9.5-Debian) -> NAS (Lenovo ix2)

  • From the Client (Windows 10), I can create, rename, delete and change files and folders without any problem, but I cannot retrieve the DOS/Windows file attribute information (Read-only, Hidden, System) or set it!

  • Windows 10 has SMBv1 enabled.

  • The Debian server was updated to 10.10 from 8 (new hardware machine, clean installation from scratch); with version 8.0 the problem was not there.

  • The server mounts the NAS share via fstab (it is the same when mounted from the prompt) with:

    //NAS/STORAGE  /mnt/STORAGE  cifs  username=...,password=...,rw,dir_mode=0777,file_mode=0666,uid=...,gid=...,noauto,noserverino,nounix,vers=1.0  

    I have tried vers=2.0, vers=3.0 and many others, as well as omitting the option, but nothing changed.

  • The server (Debian 10) mounts (under subfolders of "/mnt") many other Windows shares (Windows 10, Windows 7, Windows NT 4.0) and I don't have any problem with those; only the NAS (Lenovo ix2) is not working properly.

  • smb.conf:

    [global]
    workgroup = WORKGROUP
    # *** I have tried many parameter for protocol:
    #client min protocol = SMB2
    #server min protocol = SMB2
    #client max protocol = NT1
    #server max protocol = NT1
    #max protocol = NT1
    interfaces = 127.0.0.0/8 enp11s0f0 enp11s0f1 10.9.8.1
    bind interfaces only = yes
    log file = /var/log/samba/log.%m
    max log size = 1000
    server role = standalone server
    netbios name = PIGRECO
    server string = Distributed File Server - Samba %v (%h)
    interfaces = lo enp11s0f0 enp11s0f1
    local master = yes
    domain master = yes
    preferred master = yes
    os level = 35
    encrypt passwords = yes
    smb passwd file = /etc/samba/smbpasswd
    guest account = studio
    ldap ssl = no
    client lanman auth = yes
    client plaintext auth = yes
    wins support = yes
    dfree command = /usr/local/bin/dfree

    [COMMON]
    comment = Common PIGRECO Archive
    path = /mnt
    force user = studio
    read only = No
    create mask = 0777
    directory mask = 0777
    guest ok = Yes
    hosts allow = 192.168.0. 172.0.0. 10.9.8.
    strict locking = No
    browsable = Yes
    # The "ea support" set to "no" don't solve the problem:
    #ea support = no
    vfs objects = recycle
    recycle:repository = /mnt/STORAGE/Trash
    recycle:keeptree = Yes
    recycle:versions = Yes
    recycle:maxsize = 104857600
  • I have tried the CIFS debug (with echo 7 > /proc/fs/cifs/cifsFYI) and get this error:

    Status code returned 0xc000004f NT_STATUS_EAS_NOT_SUPPORTED  

    but I cannot find any information about it online (see the sketch after this list).

  • In /mnt I have created a new subfolder for the NAS, like the other folders (same permissions and owner), but only the read/write file-creation permission is affected when I change the Linux attributes/owner.
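
Since NT_STATUS_EAS_NOT_SUPPORTED indicates the underlying CIFS mount refuses extended attributes, one thing to try (a sketch, not a verified fix for the ix2) is telling Samba to stop storing DOS attributes in EAs for that share and fall back to mapping them onto the classic permission bits:

[COMMON]
    # hypothetical additions to the existing share definition
    ea support = no
    store dos attributes = no     # don't use the user.DOSATTRIB extended attribute
    map archive = yes             # map Archive/Hidden/System onto the execute bits instead
    map hidden = yes
    map system = yes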

Any suggestions?

Multiple dummy monitors on remote headless Linux for VNC to local multiple monitors

Posted: 16 Nov 2021 11:23 AM PST

I admit defeat. I have been trying to configure my remote Linux box to have two dummy monitors so that I can use multiple local monitors to VNC into it. I'm surprised I can't find anything on the web from anyone else who has needed this.

I've also tried creating a monitor on the Linux box that is double wide and then using x11vnc to -clip an area for each display, but I'm having issues creating a monitor that large with the dummy driver.

I do have a graphics adapter installed that has two DisplayPorts but am not planning to use it. When I was using the real adapter, I was getting sluggish behavior. When I tried the dummy, it was very responsive. So I'm hoping to just create another dummy.

I'm using KDE DM.

I have seen many examples of using VIRTUAL1 but I can't get that working with the dummy driver. I tried adding Option "VirtualHeads" "2" into the config but the dummy driver doesn't recognize it.

I've seen suggestions of using Xvfb but it has been deprecated by the dummy driver since 2016.

Here are some details.

$ uname -a
Linux bgrupczy-linux 5.8.0-53-generic #60~20.04.1-Ubuntu SMP Thu May 6 09:52:46 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux

/usr/share/X11/xorg.conf.d/xorg.conf is empty

The following was gleaned from searching the internet. Once I got it working I stopped tweaking it, so it may have flaws, but they don't seem to affect me.

/usr/share/X11/xorg.conf.d/dummy-1920x1080.conf has the following which gets me my single 1920x1080.

Section "Monitor"
  Identifier "Monitor0"
  HorizSync 28.0-80.0
  VertRefresh 48.0-75.0
  Modeline "1920x1080_60.00" 173.00 1920 2048 2248 2576 1080 1083 1088 1120 -hsync +vsync
EndSection

Section "Device"
  Identifier "Card0"
  Driver "dummy"
  VideoRam 256000
EndSection

Section "Screen"
  Identifier "Screen0"
  Device "Card0"
  Monitor "Monitor0"
  DefaultDepth 24
  SubSection "Display"
    Depth 24
    Modes "1920x1080_60.00"
  EndSubSection
EndSection
$ xrandr
xrandr: Failed to get size of gamma for output default
Screen 0: minimum 320 x 240, current 1920 x 1080, maximum 1920 x 1080
default connected 1920x1080+0+0 0mm x 0mm
   1920x1080     60.00*
   1680x1050     70.00    60.00
   1400x1050     70.00    60.00
   1600x900      60.00
   1280x1024     75.00    60.00
   1440x900      60.00
   1400x900      60.00
   1280x960      60.00
   1368x768      60.00
   1360x768      60.00
   1280x800      60.00
   1152x864      75.00    70.00    60.00
   1280x720      60.00
   1024x768      75.00    70.00    60.00
   1024x576      60.00
   960x600       60.00
   832x624       75.00
   960x540       60.00
   800x600       75.00    72.00    60.00    56.00
   840x525       70.00    60.00
   864x486       60.00
   700x525       70.00    60.00
   800x450       60.00
   640x512       75.00    60.00
   720x450       60.00
   700x450       60.00
   640x480       75.00    73.00    60.00
   684x384       60.00
   680x384       60.00
   640x400       60.00
   576x432       75.00    70.00    60.00
   640x360       60.00
   512x384       75.00    70.00    60.00
   512x288       60.00
   416x312       75.00
   480x270       60.00
   400x300       75.00    72.00    60.00    56.00
   432x243       60.00
   320x240       75.00    73.00    60.00
$ lspci | grep VGA
00:02.0 VGA compatible controller: Intel Corporation Device 9bc5 (rev 05)

I can see from xrandr that my monitor is called "default". So I tried this:

$ cvt 3840 1080
# 3840x1080 59.96 Hz (CVT) hsync: 67.16 kHz; pclk: 346.00 MHz
Modeline "3840x1080_60.00"  346.00  3840 4088 4496 5152  1080 1083 1093 1120 -hsync +vsync
$ xrandr --newmode "3840x1080_60.00"  346.00  3840 4088 4496 5152  1080 1083 1093 1120 -hsync +vsync
$ xrandr --addmode default "3840x1080_60.00"
$ xrandr --output default --mode "3840x1080_60.00"

The last line gives me:

xrandr: Configure crtc 0 failed  

I tried increasing VideoRam to 512000 (double) to make sure I had room; I'm not sure what else to do there. I have 32 GB of RAM.

And ALL xrandr commands give me xrandr: Failed to get size of gamma for output default, and I figure that's because it's a dummy monitor that has no gamma.

I'm currently working within the VNC session. Do I need to shut down x11vnc to get xrandr to complete? I'm at my wits' end.

I'm starting x11vnc like this:

x11vnc -loop -forever -shared -repeat -noxdamage -xrandr -display :0 -clip 1920x1080+0+0  

The Linux box is within my local network, so I'm not concerned with authentication/passwords.

Edit 2021-05-27:

More lurking and I found some options. I was able to get a double-wide screen and then create two x11vnc instances, but this isn't optimal: the Linux box still sees it as a single screen. Is there a way to take that screen and tell the Linux box to split it, so that if I maximize a window in KDE it will not span both local screens?
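
One mechanism that may do exactly this split (a sketch based on xrandr 1.5's --setmonitor feature; the monitor names and physical-size numbers are placeholders I have not tested against this dummy setup): declare two logical monitors over the one 3840x1080 framebuffer so the window manager maximizes into halves:

xrandr --setmonitor left  1920/509x1080/286+0+0    default
xrandr --setmonitor right 1920/509x1080/286+1920+0 none
xrandr --listmonitors      # should now report two monitors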

Xorg conf file:

Section "Monitor"
  Identifier "Monitor0"
EndSection

Section "Device"
  Identifier "Card0"
  Driver "dummy"
  VideoRam 512000
EndSection

Section "Screen"
  Identifier "Screen0"
  Device "Card0"
  Monitor "Monitor0"
  DefaultDepth 24
  SubSection "Display"
    Depth 24
    Virtual 3840 1080
  EndSubSection
EndSection
x11vnc -loop -forever -shared -repeat -noxdamage -xrandr -display :0 -rfbport 5900 -clip 1920x1080+0+0
x11vnc -loop -forever -shared -repeat -noxdamage -xrandr -display :0 -rfbport 5901 -clip 1920x1080+1920+0

I can then connect to VNC displays :0 and :1 and arrange them on local monitors and resize the windows to fit those monitors.

When a dialog window appears, many times it's in the middle of the Linux "big screen" which for me spans both monitors...

Edit 2021-11-12:

One solution: https://superuser.com/a/1188573/514658

The real issue is not being able to get my VNC viewer to go full screen and span only two of my three monitors. The only option available in the viewer is to use ALL monitors, which then blocks my use of the Windows side. DisplayFusion at the above link did the trick. Now I can seamlessly drag between my two monitors and not get stuck on the edge of one.

Edit 2021-11-16:

"SOLVED": Instead of using DisplayFusion, which is a big hammer, more searching led me to Windows PowerShell. No need for a third party utility. I only need to move/resize a specific window to fit on two of my three monitors full screen. Here it is:

Add-Type @"
  using System;
  using System.Runtime.InteropServices;

  public class Win32 {
    [DllImport("user32.dll")]
    [return: MarshalAs(UnmanagedType.Bool)]
    public static extern bool GetWindowRect(IntPtr hWnd, out RECT lpRect);

    [DllImport("user32.dll")]
    [return: MarshalAs(UnmanagedType.Bool)]
    public static extern bool GetClientRect(IntPtr hWnd, out RECT lpRect);

    [DllImport("user32.dll")]
    [return: MarshalAs(UnmanagedType.Bool)]
    public static extern bool MoveWindow(IntPtr hWnd, int X, int Y, int nWidth, int nHeight, bool bRepaint);
  }

  public struct RECT
  {
    public int Left;        // x position of upper-left corner
    public int Top;         // y position of upper-left corner
    public int Right;       // x position of lower-right corner
    public int Bottom;      // y position of lower-right corner
  }
"@

$h = (Get-Process vnc-E4_6_3-x86_win32_viewer).MainWindowHandle
[Win32]::MoveWindow($h, 1920, 0, 3840, 1080, $true)

Increase System Tray Icon Size in KDE

Posted: 16 Nov 2021 10:51 AM PST

How can I increase the size of these system tray icons (shown below)?

[screenshot of the system tray icons]

Update: In Debian 11 with KDE Plasma 5.20.5, there is now an option to scale system-tray icons to the panel's height. Here's a short video showing that. Thanks go to KDE!

extundelete - How to solve 'Block bitmap checksum does not match bitmap when trying to examine filesystem'?

Posted: 16 Nov 2021 11:04 AM PST

The OS is Ubuntu 17.10 and I've been trying to recover (undelete) a file with extundelete. (The file system is ext4.)

[screenshot of the first extundelete attempt]

This didn't work. So, I tried with

extundelete /dev/mapper/ubuntu--vg-root --restore-file /home/chan/origol/routes/user.js  

And it worked.

However, I got another problem.

Loading filesystem metadata ...
extundelete: Block bitmap checksum does not match bitmap when trying to examine filesystem

I couldn't find any information about it. How can I solve this problem?

No usable default provider could be found for your system (VM not recognized)

Posted: 16 Nov 2021 09:01 AM PST

I am having problems with Vagrant; it does not recognize Oracle VM VirtualBox.

When I try:

$ vagrant up

No usable default provider could be found for your system.

Vagrant relies on interactions with 3rd party systems, known as
"providers", to provide Vagrant with resources to run development
environments. Examples are VirtualBox, VMware, Hyper-V.

CLI shows my VirtualBox version:

$ vboxmanage --version

5.2.2r119230

And Vagrant version:

$ vagrant version

Installed Version: 1.9.1
Latest Version: 2.0.1

What is wrong with my settings?
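
One mismatch worth checking (an assumption drawn from the version numbers, not something I can confirm for this machine): VirtualBox 5.2 was released after Vagrant 1.9.1, so that Vagrant release may simply not know how to detect it. Two quick tests:

vagrant up --provider=virtualbox     # bypass provider auto-detection and surface the underlying error
vagrant version                      # if still 1.9.1, upgrading toward the listed 2.0.1 is the obvious next step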

Command to view which version of NixOS my machine is running?

Posted: 16 Nov 2021 11:37 AM PST

At https://nixos.org/ I can view the recent releases of NixOS.

Is there a command I can run to see which version is on my machine?
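
For reference, a couple of commands that typically report this (the second reads the standard os-release file, in case the first isn't on the PATH for some reason):

nixos-version
cat /etc/os-release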

Cannot boot into Linux off live USB

Posted: 16 Nov 2021 10:07 AM PST

I am trying to dual boot Linux on my laptop (Dell XPS 15) which is running Windows 10 Pro. I did not have any problem dual booting the two operating systems on my desktop.

I cannot boot up ANY Linux distro; I have tried Mint, Ubuntu and elementary OS. Whenever I try to boot from a live USB, I get this message on all attempts:

GNU GRUB version 2.02~beta2-9ubuntu1

Minimal BASH-like line editing is supported. For the first word, TAB lists possible command completions. Anywhere else TAB lists possible device or file completions.

grub>

When I type in "boot" I get the error: you need to load the kernel first.

Secure Boot is disabled, and I have tried both legacy and UEFI BIOS modes. I have never come across this error before; what causes it?
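
As a debugging aid, from that grub> prompt you can usually boot an Ubuntu-family live image by hand. The paths below are the usual /casper layout and are an assumption about these particular ISOs (file names vary by release, e.g. vmlinuz.efi or initrd.lz):

grub> ls                                   # find the USB partition, e.g. (hd0,msdos1)
grub> set root=(hd0,msdos1)
grub> linux /casper/vmlinuz boot=casper quiet splash
grub> initrd /casper/initrd
grub> boot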

How to run a shell script (bash) on a chromebook?

Posted: 16 Nov 2021 09:38 AM PST

I have a shell script that I need to run in order to connect to a remote Ubuntu Linux machine with an encrypted HDD and enter the passphrase. There's a shell script for this, but I have a Chromebook most of the time when I'm away. Can this be done from a Chromebook?

I have already seen Chrome extensions like Mosh.
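
One route (a sketch, assuming a reasonably recent ChromeOS where the "Linux development environment" / Crostini can be switched on in Settings, and using a placeholder name for the script): copy the script into the Linux container and run it there with bash:

# inside the Crostini terminal
sudo apt update && sudo apt install -y openssh-client   # only if the script relies on ssh
bash ./unlock-remote.sh                                  # hypothetical name for the script in question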

Find all occurrences in a file with sed

Posted: 16 Nov 2021 11:13 AM PST

Using the OPENSTEP 4.2 OS... I am currently using the following sed command:

sed -n '1,/141.299.99.1/p' TESTFILE | tail -3  

This command will find one instance in a file of the IP 141.299.99.1 and also include the 3 lines before it, which is all good, except that I would also like to find all the instances of the IP and the 3 lines before each one, not just the first.
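
Since GNU grep's -B3 may not be available on OPENSTEP, here is a sketch using a small rolling buffer in awk (assuming the stock awk there handles arrays and the modulo operator, which I have not been able to verify on 4.2):

awk '{ buf[NR % 4] = $0 }
     /141\.299\.99\.1/ {
         for (i = NR - 3; i <= NR; i++) if (i > 0) print buf[i % 4]
         print "--"
     }' TESTFILE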

How can I use Unix to rename all html files by their titles?

Posted: 16 Nov 2021 10:16 AM PST

As in, rename all HTML files in a directory by the text contained in their <title> tags?

Could a combination of grep, sed, and mv work?

For example, I have a file named 1.html. The title of 1.html is contained in the HTML file as TEXT (it is contained within the title tags, i.e. <title>TEXT</title>). I would like to rename 1.html to TEXT.html.

If a file is named as 5.html, and the title of 5.html is TEST2, then I want to rename 5.html to TEST2.html.
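
A rough sketch of the grep/sed/mv idea (assuming one <title>…</title> pair per file, on a single line, and titles that contain no slashes or newlines):

for f in *.html; do
    title=$(sed -n 's:.*<title>\(.*\)</title>.*:\1:p' "$f" | head -n 1)
    [ -n "$title" ] && mv -i -- "$f" "$title.html"     # -i: don't silently clobber duplicate titles
done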

Stay at same working directory when changing to sudo

Posted: 16 Nov 2021 11:00 AM PST

When working on the command line, I often change to sudo using sudo -i. However, my working directory changes automatically to /root. I never want to go there; I want to stay where I was! How can I achieve this?
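
For comparison (not the only way to do it): sudo -s starts a root shell without simulating a full login, so it stays in the current working directory, whereas sudo -i deliberately behaves like a fresh login and changes to root's home:

sudo -s        # root shell, current directory preserved
sudo -i        # login shell, cd's to /root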
