Friday, April 15, 2022

Recent Questions - Unix & Linux Stack Exchange


`ulimit` a shell, not a user

Posted: 15 Apr 2022 02:07 AM PDT

We are multiple users working on the same account.

I want to limit my activities so that I don't hang the computer and bother the other users. I want to cap my resource usage so that I don't consume everything (because if a process can, it will).

There are two approaches: priorities and strict limits. The problem is that Linux doesn't manage priorities very well, meaning you'll still slow down other activities even at the worst priority. That leaves strict limits.

But I want to limit only my shell, not the whole account that many people use.

Do you have suggestions regarding priorities or strict limits?
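For reference, ulimit applies only to the shell that sets it and to that shell's children, so it already scopes to one session rather than the whole account. A minimal sketch, assuming bash or zsh, with illustrative values (some-heavy-command is a placeholder):

# Limits apply to this shell and its children; other sessions on the
# same account are unaffected.
ulimit -v 2097152   # cap virtual memory at ~2 GiB (value in KiB)
ulimit -u 200       # cap the number of processes

# Or confine a limit to a single command using a subshell:
(ulimit -v 1048576; exec some-heavy-command)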

How to use an inferior when rewriting inputs in GUIX?

Posted: 15 Apr 2022 01:42 AM PDT

The GUIX Inferiors manual states that

Thus you can insert an inferior package pretty much anywhere you would insert a regular package: in manifests, in the packages field of your operating-system declaration, and so on.

However, I cannot figure out how to use an Inferior when rewriting the input of a package. E.g. this manifest does not work for me:

(use-modules (guix inferior) (guix channels) (guix packages)
             (srfi srfi-1))
(use-package-modules docker)
(use-package-modules python-xyz)

(define mychannels
  (list (channel
         (name 'guix)
         (url "https://git.savannah.gnu.org/git/guix.git")
         ;; Last commit that still has python-pyyaml 5.4.1.
         (commit
          "d3e1a94391a838332b0565d56762a58cf87ac6b1"))))

(define myinferior
  (inferior-for-channels mychannels))

(define pyyaml5
  (first (lookup-inferior-packages myinferior "python-pyyaml")))

(define pyyaml5-instead-of-pyyaml6
  ;; This is a procedure to replace pyyaml 6.0 by pyyaml 5.4.1.

  ;; The line below does not work, raises this error:
  ;; In procedure package-properties: Wrong type argument:
  ;; #<inferior-package python-pyyaml@5.4.1 7f42653831b0>
  (package-input-rewriting `((,python-pyyaml . ,pyyaml5)))

  ;; The line below does work (and has a similar result).
  ;(package-input-rewriting `((,python-pyyaml . ,python-pyyaml-for-awscli)))
  )

(define docker-compose-with-pyyaml5
  (pyyaml5-instead-of-pyyaml6 docker-compose))

(packages->manifest
 (list pyyaml5
       (specification->package "python")
       docker-compose-with-pyyaml5
       ))

docker-compose only works with python-pyyaml 5.4.1, and the version in the channel has been upgraded to 6.0. What I'm therefore trying to do is rewrite the input of docker-compose to use python-pyyaml 5.4.1 from an earlier revision of the channel. However, my attempts fail with:

Backtrace:
In guix/packages.scm:
  1269:17 19 (supported-package? #<package docker-compose@1.29.2 gu…> …)
In guix/memoization.scm:
    101:0 18 (_ #<hash-table 7fa05e6274c0 152/223> #<package docker…> …)
In guix/packages.scm:
  1247:37 17 (_)
  1507:16 16 (package->bag _ _ _ #:graft? _)
  1608:48 15 (thunk)
  1403:25 14 (inputs _)
In srfi/srfi-1.scm:
   586:29 13 (map1 (("python-cached-property" #<package python-…>) …))
   586:29 12 (map1 (("python-distro" #<package python-distro@1.…>) …))
   586:29 11 (map1 (("python-docker" #<package python-docker@5.…>) …))
   586:29 10 (map1 (("python-dockerpty" #<package python-docker…>) …))
   586:29  9 (map1 (("python-docopt" #<package python-docopt@0.…>) …))
   586:29  8 (map1 (("python-dotenv" #<package python-dotenv@0.…>) …))
   586:29  7 (map1 (("python-jsonschema" #<package python-jsons…>) …))
   586:17  6 (map1 (("python-pyyaml" #<package python-pyyaml@6.…>) …))
In guix/packages.scm:
  1360:20  5 (rewrite ("python-pyyaml" #<package python-pyyaml@6.0…>))
In guix/memoization.scm:
    101:0  4 (_ #<hash-table 7fa06ac0b540 8/31> #<package python-py…> …)
In guix/packages.scm:
  1377:22  3 (_)
  1435:37  2 (loop #<inferior-package python-pyyaml@5.4.1 7fa06304b3…>)
In ice-9/boot-9.scm:
  1685:16  1 (raise-exception _ #:continuable? _)
  1685:16  0 (raise-exception _ #:continuable? _)

ice-9/boot-9.scm:1685:16: In procedure raise-exception:
In procedure package-properties: Wrong type argument: #<inferior-package python-pyyaml@5.4.1 7fa06304b3f0>

How can I do this rewriting of input with an inferior?

Apparently there now is a python-pyyaml 5.4.1 in the channel, called python-pyyaml-for-awscli. Rewriting the input of docker-compose with that package does work as expected, so as far as I can see I'm using the correct syntax when rewriting input. (I'm not sure what the backtick, the dot, and the commas do, maybe there is a mistake there.)

(As for the XY-problem: I can now run docker-compose using python-pyyaml-for-awscli; however, I'm still interested in how to use the inferior, because next time there might not be such a package available.)

During zpool scrub, is the percent-done estimate buggy, showing 0.01% when 10% through?

Posted: 15 Apr 2022 01:15 AM PDT

I have a raidz2 on a quad of 250G drives in a USB enclosure on Linux, purely for backups. In fact, I'm aware at least one drive has issues, so it's a fun experiment. Naturally, it has started to hiccup: the drive parks and re-powers the spindle, then it's back. I can always repair/clear the write errors, but while a scrub is running on the 673 GB volume, I see 0.01% done when I believe it's about 10% of the way through. I now believe this is a bug, as it is off by exactly a factor of 100. I guess I could file this in the tracker.

scan: scrub in progress since Fri Apr 15 19:53:20 2022
      90.1G scanned at 88.7M/s, 59.9M issued at 58.9K/s, 421G total
      7.50K repaired, 0.01% done, no estimated completion time

NAME       USED  AVAIL     REFER  MOUNTPOINT
scratchy   318G   355G      318G  /mnt/scratchy

Multiple ExecStartPre in systemd unit override files. Does systemd guarantee execution order?

Posted: 14 Apr 2022 11:09 PM PDT

From various sources, it seems that ExecStartPre entries in an override conf file are executed in order, and after those in the main service file.

But is this guaranteed by systemd itself? Also, what happens if an ExecStartPre command is a nohup of some sort, or a long-running process?
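For reference, this is the drop-in layout in question; a minimal sketch with hypothetical paths (created e.g. via systemctl edit myservice):

# /etc/systemd/system/myservice.service.d/override.conf
[Service]
ExecStartPre=/usr/local/bin/pre-step-1.sh
ExecStartPre=/usr/local/bin/pre-step-2.sh

systemd.service(5) documents that multiple ExecStartPre commands run serially, one after the other, each waiting for the previous to exit, which is why a long-running ExecStartPre would delay ExecStart indefinitely.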

sudoers - when command is run as a specific user

Posted: 14 Apr 2022 11:53 PM PDT

I wish to keep certain environment variables when a certain command is run as a certain user under sudo. man sudoers seems to suggest that Defaults can do this, if I've read the paragraph copied below correctly (see the highlighted part), but the syntax spec beneath it doesn't seem to match (unless it's the Runas portion?), and I have found no examples. Is it possible? My current, failing attempt is:

/etc/sudoers.d/certain:4:23: syntax error
Defaults:certain-user!/certain-command.sh env_keep += "ENV_VAR1 ENV_VAR2"
                      ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

I've tried variations of this (adding spaces, changing the command etc) but to no avail.

The paragraph I mentioned:

Defaults
Certain configuration options may be changed from their default values at run-time via one or more Default_Entry lines. These may affect all users on any host, all users on a specific host, a specific user, a specific command, or commands being run as a specific user. Note that per-command entries may not include command line arguments. If you need to specify arguments, define a Cmnd_Alias and reference that instead.

Default_Type ::= 'Defaults' |
                 'Defaults' '@' Host_List |
                 'Defaults' ':' User_List |
                 'Defaults' '!' Cmnd_List |
                 'Defaults' '>' Runas_List

Any help or insight would be much appreciated.
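Going by the grammar quoted above, the "commands being run as a specific user" case is the > (Runas_List) form, and the per-command case is the ! form, which needs a Cmnd_Alias if arguments matter. A hedged sketch of both, reusing the question's placeholder names (untested):

# Keep the variables whenever a command is run *as* certain-user:
Defaults>certain-user env_keep += "ENV_VAR1 ENV_VAR2"

# Or keep them for a specific command, via a Cmnd_Alias:
Cmnd_Alias CERTAIN_CMD = /certain-command.sh
Defaults!CERTAIN_CMD env_keep += "ENV_VAR1 ENV_VAR2"

Note that no Default_Type in the grammar combines a User_List and a Cmnd_List in one entry, which may be why the attempt above fails to parse.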

Is it secure to copy packages directly to /var/cache/apt/archives/?

Posted: 14 Apr 2022 11:09 PM PDT

After a fresh install, instead of downloading some .deb packages from a Debian mirror, I would like to copy the files directly from a thumb drive to /var/cache/apt/archives/.

Would it offer the same security guarantees (in case the files are corrupted)? In other words, are the file hashes checked even when packages are taken from the cache, or only right after a finished download?
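For what it's worth, a cached .deb can also be verified by hand against the hash the repository metadata advertises; a rough sketch (package name illustrative):

# Hash the locally copied package...
sha256sum /var/cache/apt/archives/hello_2.10-2_amd64.deb

# ...and compare against what the repository metadata declares:
apt-cache show hello | grep ^SHA256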

Should I use a specific chmod / chown for the cached .deb ?

Apt 2.2.4 / Debian 11

How to display the process title with ps?

Posted: 14 Apr 2022 09:23 PM PDT

According to setproctitle(3), the process title appears in ps output. But after looking through ps(1), I still have no idea how to display it with ps.
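Since setproctitle(3) typically works by overwriting the process's argument vector, the title should show up in the args column; a hedged sketch:

# 'args' (a.k.a. 'cmd') shows the argument vector, which is what
# setproctitle(3) usually rewrites; 'comm' keeps the executable name.
ps -e -o pid,comm,args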

How does the regular expression [\\\/][^\\\/]*$ work?

Posted: 14 Apr 2022 11:01 PM PDT

I've some notes of useful regular expressions and one that I always use is the following:

echo '/home/user/folder/file.txt' | sed -E 's/[\\\/][^\\\/]*$//g'  

The result that I get from this regular expression is the path of the parent folder /home/user/folder. I understand the basics of regular expressions with:

\s          # whitespace character
\S          # non-whitespace character
.           # any character
\.          # literal period
+           # one or more of the preceding
{5}         # exactly 5 of the preceding
*           # zero or more of the preceding
?           # zero or one of the preceding
[0-9]       # any digit
[a-z]       # any lowercase letter
[^x-y]      # any character not in the range
^           # beginning of line
$           # end of line

However, I haven't managed to figure out the meaning of [\\\/] and [^\\\/] in this regular expression. How does it work?
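For what it's worth, inside a bracket expression a backslash is an ordinary character, so [\\\/] is the set containing \ and /, and [^\\\/]* matches a run of characters that are neither; anchored with $, the expression deletes the final separator plus the last path component. A quick check (the second example assumes the \ is there to handle Windows-style paths too):

echo '/home/user/folder/file.txt'    | sed -E 's/[\\\/][^\\\/]*$//'
# -> /home/user/folder
echo 'C:\Users\user\folder\file.txt' | sed -E 's/[\\\/][^\\\/]*$//'
# -> C:\Users\user\folder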

Samba AD DC DNS Directory Service error

Posted: 15 Apr 2022 02:13 AM PDT

Clients can log in and pull the cached GPOs; only the stored credentials for mapped drives are lacking. With the RSAT tools, of course, there is no more connection. I'm really confused at this point, because this all resulted from a problem with the resolv.conf file: I fixed resolv.conf, but the directory services seem to have said goodbye.

So:

  • Router 192.168.0.1
  • DC1 192.168.0.10
  • DC2 192.168.0.100

  • Krb5.conf is correct
  • Ping IP from dc to dc goes through vice versa
  • When pinging the hostname, both get dc0%.my.domain directly at the end
  • Hostname ping from client outside domain = ipv6 response
  • Hostname ping from client inside domain = sluggish ipv6 response
  • ping IP from client both non and dom member to Dcs= going through

resolv.conf

nameserver 192.168.0.100
nameserver 127.0.0.1
search MY

ip addr

eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 0c:c4:7a:XX:XX:XX brd ff:ff:ff:ff:ff:ff
    altname enp1s0
    inet 192.168.0.10/24 brd 192.168.0.255 scope global eno1
       valid_lft forever preferred_lft forever
    inet6 fe80::%%%%%%%/64 scope link
       valid_lft forever preferred_lft forever

Network config

network:
    version: 2
    renderer: networkd
    ethernets:
        eno1:
            addresses:
                - 192.168.0.10/24
            nameservers:
                addresses: [192.168.0.10, 192.168.0.100]
            routes:
                - to: default
                  via: 192.168.0.1

resolvectl

Global
         Protocols: -LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported
  resolv.conf mode: foreign
Current DNS Server: 192.168.0.1
       DNS Servers: 192.168.0.1 192.168.0.10 192.168.0.100
        DNS Domain: MY

Link 2 (eno1)
Current Scopes: DNS
     Protocols: +DefaultRoute +LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported
   DNS Servers: 192.168.0.10 192.168.0.100

Link 3 (eno2)
Current Scopes: none
     Protocols: -DefaultRoute +LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported

smb.conf

[global]
        min protocol = NT1
        dns forwarder = 192.168.0.1
        netbios name = DC01
        realm = MY.DOMAIN
        server role = active directory domain controller
        workgroup = MVZ
        idmap_ldb:use rfc2307 = yes

        map to guest = Bad User
        log file = /var/log/samba/%m
        log level = 3

        template shell = /bin/bash
        winbind use default domain = true
        winbind offline logon = false
        winbind nss info = rfc2307
        winbind enum users = yes
        winbind enum groups = yes

[sysvol]
        path = /var/lib/samba/sysvol
        read only = No

[netlogon]
        path = /var/lib/samba/sysvol/my.domain/scripts
        read only = No
....

The logs are pretty confusing to me. The only one containing current entries is the %m log:

[2022/04/14 14:51:46.110762,  0] ../../lib/util/util_runcmd.c:352(samba_runcmd_io_han>
  /usr/sbin/samba_dnsupdate: Traceback (most recent call last):
[2022/04/14 14:51:46.110843,  0] ../../lib/util/util_runcmd.c:352(samba_runcmd_io_han>
  /usr/sbin/samba_dnsupdate:   File "/usr/sbin/samba_dnsupdate", line 298, in check_d>
[2022/04/14 14:51:46.110877,  0] ../../lib/util/util_runcmd.c:352(samba_runcmd_io_han>
  /usr/sbin/samba_dnsupdate:     ans = check_one_dns_name(normalised_name, d.type, d)
[2022/04/14 14:51:46.110895,  0] ../../lib/util/util_runcmd.c:352(samba_runcmd_io_han>
  /usr/sbin/samba_dnsupdate:   File "/usr/sbin/samba_dnsupdate", line 275, in check_o>
[2022/04/14 14:51:46.110912,  0] ../../lib/util/util_runcmd.c:352(samba_runcmd_io_han>
  /usr/sbin/samba_dnsupdate:     return resolver.resolve(name, name_type)
[2022/04/14 14:51:46.110928,  0] ../../lib/util/util_runcmd.c:352(samba_runcmd_io_han>
  /usr/sbin/samba_dnsupdate:   File "/usr/lib/python3/dist-packages/dns/resolver.py",>
[2022/04/14 14:51:46.110998,  0] ../../lib/util/util_runcmd.c:352(samba_runcmd_io_han>
  /usr/sbin/samba_dnsupdate:     timeout = self._compute_timeout(start, lifetime)
[2022/04/14 14:51:46.111020,  0] ../../lib/util/util_runcmd.c:352(samba_runcmd_io_han>
  /usr/sbin/samba_dnsupdate:   File "/usr/lib/python3/dist-packages/dns/resolver.py",>
[2022/04/14 14:51:46.111106,  0] ../../lib/util/util_runcmd.c:352(samba_runcmd_io_han>
  /usr/sbin/samba_dnsupdate:     raise Timeout(timeout=duration)
[2022/04/14 14:51:46.111130,  0] ../../lib/util/util_runcmd.c:352(samba_runcmd_io_han>
  /usr/sbin/samba_dnsupdate: dns.exception.Timeout: The DNS operation timed out after>
[2022/04/14 14:51:46.111157,  0] ../../lib/util/util_runcmd.c:352(samba_runcmd_io_han>
  /usr/sbin/samba_dnsupdate:
[2022/04/14 14:51:46.111177,  0] ../../lib/util/util_runcmd.c:352(samba_runcmd_io_han>
  /usr/sbin/samba_dnsupdate: During handling of the above exception, another exceptio>
[2022/04/14 14:51:46.111192,  0] ../../lib/util/util_runcmd.c:352(samba_runcmd_io_han>
  /usr/sbin/samba_dnsupdate:
[2022/04/14 14:51:46.111214,  0] ../../lib/util/util_runcmd.c:352(samba_runcmd_io_han>
  /usr/sbin/samba_dnsupdate: Traceback (most recent call last):
[2022/04/14 14:51:46.111228,  0] ../../lib/util/util_runcmd.c:352(samba_runcmd_io_han>
  /usr/sbin/samba_dnsupdate:   File "/usr/sbin/samba_dnsupdate", line 848, in <module>
[2022/04/14 14:51:46.111245,  0] ../../lib/util/util_runcmd.c:352(samba_runcmd_io_han>
  /usr/sbin/samba_dnsupdate:     elif not check_dns_name(d):
[2022/04/14 14:51:46.111260,  0] ../../lib/util/util_runcmd.c:352(samba_runcmd_io_han>
  /usr/sbin/samba_dnsupdate:   File "/usr/sbin/samba_dnsupdate", line 300, in check_d>
[2022/04/14 14:51:46.111299,  0] ../../lib/util/util_runcmd.c:352(samba_runcmd_io_han>
  /usr/sbin/samba_dnsupdate:     raise Exception("Timeout while waiting to contact a >
[2022/04/14 14:51:46.111318,  0] ../../lib/util/util_runcmd.c:352(samba_runcmd_io_han>
  /usr/sbin/samba_dnsupdate: Exception: Timeout while waiting to contact a working DN>
[2022/04/14 14:51:46.129766,  0] ../../source4/dsdb/dns/dns_update.c:85(dnsupdate_nam>
  dnsupdate_nameupdate_done: Failed DNS update with exit code 1

Does anyone see what I'm missing? Is there important info missing?

Why is curl, but not wget, having trust issues with software.download.prss.microsoft.com?

Posted: 14 Apr 2022 10:34 PM PDT

The following URL redirects to a microsoft.com subdomain: https://tb.rg-adguard.net/dl.php?go=3dd1ce66, namely to https://software.download.prss.microsoft.com/db/Win10_20H2_v2_EnglishInternational_x64.iso?t=... (with ... being a random token).

I was able to get the final redirect URL by running:

curl -LsI -o /dev/null -w %{url_effective} "https://tb.rg-adguard.net/dl.php?go=7e583fea"

But no matter whether I run wget https://tb.rg-adguard.net/dl.php?go=3dd1ce66 or wget https://software.download.prss.microsoft.com/db/Win10_20H2_v2_EnglishInternational_x64.iso?t=...................

I always get certificate errors that I don't get when downloading the file using Firefox.

wget https://software.download.prss.microsoft.com/db/Win10_20H2_v2_EnglishInternational_x64.iso\?t\=...................
--2022-04-12 14:57:29--  https://software.download.prss.microsoft.com/db/Win10_20H2_v2_EnglishInternational_x64.iso?t=..........................
Resolving software.download.prss.microsoft.com (software.download.prss.microsoft.com)... 152.199.21.175, 2606:2800:233:1cb7:261b:1f9c:2074:3c
Connecting to software.download.prss.microsoft.com (software.download.prss.microsoft.com)|152.199.21.175|:443... connected.
ERROR: The certificate of 'software.download.prss.microsoft.com' is not trusted.

Why is the behavior not consistent across applications (Firefox vs. wget)? Is there actually a reason not to trust that certificate (and if so, why is Firefox not catching it), or is wget at fault?

I'm using Fedora 35 x64 with Wget 1.21.2 and Firefox 98.0.
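One way to inspect the chain the server actually presents (and therefore which CA wget would need in its bundle) is openssl's test client; a sketch:

# Print the certificate chain as served; compare the root against the
# CA trust store wget uses on Fedora (the shared ca-trust store).
openssl s_client -connect software.download.prss.microsoft.com:443 \
        -servername software.download.prss.microsoft.com -showcerts </dev/null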

In GUIX, how to use an old version of a package that is no longer in the channel?

Posted: 15 Apr 2022 01:01 AM PDT

Part of what attracted me to GUIX is that different versions of packages can be 'installed' at the same time without interfering with each other. But I can't figure out how to actually use those different versions.

E.g. recently, the pyyaml package was upgraded from 5.4.1 to 6.0. For various reasons, I want to keep using 5.4.1. (I'm just using pyyaml as an example here.) I do have the older versions in my store:

$ ls -d1 /gnu/store/*pyyaml*
/gnu/store/22v8l25b33vs65wjd9ap28n772bvlih3-python-pyyaml-5.4.1/
/gnu/store/2j2s1jd6y8x7mlqjp968955misx1qw1c-python-pyyaml-6.0/
/gnu/store/54imz4x65s3xbjrgrfswgk815gfkhk4b-python-pyyaml-5.4.1/
/gnu/store/6537a8na1rbilffqqi642q0lipqfi2zg-python-pyyaml-5.4.1.drv
/gnu/store/6flrrmhq203vg6awdw7r2lsmzix4g2rh-python-pyyaml-6.0-guile-builder
/gnu/store/73k3qdz9rdh64pl7a0f0951zm2pbx5s2-python-pyyaml-5.4.1.drv
/gnu/store/7bcbwi93ihz8v2sdzmj6l9vhjqaxr14l-python-pyyaml-5.4.1-builder
...

How can I use these older versions?

It would be fine to use such an older version only in isolation. For example, I was hoping something like this could work:

$ guix shell "python-pyyaml@5.4.1" python
guix shell: error: python-pyyaml: package not found for version 5.4.1

This error is expected, because that older version is not available in my channels. So maybe it is possible to somehow specify an older version of the channel to be used, but I cannot figure out how.
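guix time-machine is designed for exactly this case: it runs a guix command against an older revision of the channels. A hedged sketch, reusing the commit from the inferior question above (untested):

# Run `guix shell` as if the channel were still at the old commit
# that carries python-pyyaml 5.4.1:
guix time-machine --commit=d3e1a94391a838332b0565d56762a58cf87ac6b1 -- \
     shell python-pyyaml python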


Side-note about the XY-problem: the immediate cause for this question is that docker-compose no longer works:

$ guix shell docker-compose
guix shell: error: build of `/gnu/store/8qhvnw5mwra9i6ji24xlywcpdl0rdznn-docker-compose-1.29.2.drv' failed
$ zcat /var/log/guix/drvs/8q/hvnw5mwra9i6ji24xlywcpdl0rdznn-docker-compose-1.29.2.drv.gz
...checking requirements: ERROR: docker-compose==1.29.2 ContextualVersionConflict(PyYAML 6.0 (/gnu/store/igfl4023dzvl8vi6xs1m96lcsr4fdw07-python-pyyaml-6.0/lib/python3.9/site-packages), Requirement.parse('PyYAML<6,>=3.10'), {'docker-compose'})

However, I do not particularly care about docker-compose (w.r.t. this question). If anything, this question is part of my journey to replace it with GUIX-native tools.

(Also, I'm aware that pyyaml 6 forces some safety features on its users, so pyyaml 5 should not be used anymore; pyyaml is just used as an example.)

Use Ansible 2.12 to access AWS EC2 via host: tag class

Posted: 14 Apr 2022 08:17 PM PDT

On my local hardware, I have a Vagrant box running Ubuntu 20, on which I'm using Ansible 2.12.2

I am able to access AWS and even create an EC2 instance within a VPC.

When I view inventory, I can see the EC2 server as:

"ec2-64-135-69-12.us-west-1.compute.amazonaws.com": {      ...,      "tags": {          "Details": "File server and api",          "Name": "File server via Ansible",          "OS": "Ubuntu20",          "Type": "Image Server",          "class": "classfileserver2022"      },      ...  },  

In my next playbook, I can access the server via

hosts: "ec2-64-135-69-12.us-west-1.compute.amazonaws.com"  

But I would prefer to access it by any of the tags in the json above.

I have tried

hosts: "tags_class_classfileserver2022"  

and

hosts:
  - tags:Class="classfileserver2022"

but I get errors like

[WARNING]: Could not match supplied host pattern, ignoring: tags_class_classfileserver2022
skipping: no hosts matched

How do I reach EC2 hosts using class tags? (or any other tag..)
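With the aws_ec2 inventory plugin, tag-based groups are normally generated through keyed_groups; a hedged sketch of an inventory file (the plugin options and resulting group names depend on your configuration, so treat this as an assumption to verify):

# aws_ec2.yml (hypothetical inventory file for the aws_ec2 plugin)
plugin: amazon.aws.aws_ec2
regions:
  - us-west-1
keyed_groups:
  # should produce groups like tag_class_classfileserver2022
  - prefix: tag
    key: tags

After which hosts: tag_class_classfileserver2022 (note the single tag_ prefix, not tags_) would be expected to match.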

My playbook is as follows:

---
- name: "Prepare base of {{ server_name }} box"
  vars_files:
    - vars/0000_vars.yml
    - vars/vars_for_base_provision.yml
    - vars/vars_for_geerling.security.yml
#  hosts: "ec2-54-153-39-10.us-west-1.compute.amazonaws.com"   <-- this works
  hosts: "tags_Class_{{ tag_class }}"
  remote_user: ubuntu
  become: yes
  gather_facts: no

  pre_tasks:
  - name: Check for single host
    fail: msg="Single host check failed.  Try --limit or change `hosts` above."
    when: "{{ ansible_play_batch|length }} != 1"

  roles:
    - { role: geerlingguy.security }

Who killed my sort? or How to efficiently count distinct values from a CSV column

Posted: 14 Apr 2022 10:53 PM PDT

I'm doing some processing, trying to count how many distinct lines there are in a file containing 160,353,104 lines. Here are my pipeline and stderr output:

$ tail -n+2 2022_place_canvas_history.csv | cut -d, -f2 | tqdm --total=160353104 |\
  sort -T. -S1G | tqdm --total=160353104 | uniq -c | sort -hr > users
100%|████████████████████████████| 160353104/160353104 [0:15:00<00:00, 178051.54it/s]
 79%|██████████████████████      | 126822838/160353104 [1:16:28<20:13, 027636.40it/s]

zsh: done       tail -n+2 2022_place_canvas_history.csv | cut -d, -f2 | tqdm --total=160353104 |
zsh: killed     sort -T. -S1G |
zsh: done       tqdm --total=160353104 | uniq -c | sort -hr > users

My shell prompt (PS1) prints the return codes of all processes in the pipeline: ✔ 0|0|0|KILL|0|0|0. The first char is a green checkmark meaning the last process returned 0 (success); the other entries are the return codes of the pipelined processes, in the same order. So I noticed that my fourth command got KILL status: that's my sort command, sort -T. -S1G, which sets the local directory as temp storage and buffers up to 1 GiB.

The question is: why was it killed? Does it mean something sent a KILL signal to it? Is there a way to know "who killed" it?

Updates

After reading Marcus Müller's answer, I first tried to load the data into SQLite.

So, maybe this is a good moment to tell you that, no, don't use a CSV-based data flow. A simple

sqlite3 place.sqlite  

and in that shell (assuming your CSV has a title row that SQLite can use to determine the columns) (of course, replace $second_column_name with the name of that column)

.import 022_place_canvas_history.csv canvas_history --csv
SELECT $second_column_name, count($second_column_name)
  FROM canvas_history
  GROUP BY $second_column_name;

This was taking a lot of time, so I left it processing and went to do other things. Meanwhile, I thought some more about this other paragraph from Marcus Müller's answer:

You just want to know how often each value appeared on the second column. Sorting that before just happens because your tool (uniq -c) is bad, and needs the rows to be sorted before (there's literally no good reason for that. It's just not implemented that it could hold a map of values and their frequency and increase that as they appear).

So I thought: I can implement that. When I got back to the computer, my SQLite import had stopped because of a broken SSH pipe; I suppose the connection was closed because it hadn't transmitted data for a long time. OK, what a good opportunity to implement a counter using a dict/map/hashtable. So I wrote the following distinct file:

#!/usr/bin/env python3
import sys

counter = dict()

# Create a key for each distinct line and increment it each time the line shows up.
for l in sys.stdin:
    counter[l] = counter.setdefault(l, 0) + 1  # After Update2 note: don't do this, just use `counter[l] = counter.get(l, 0) + 1`

# Print entries sorted by the tuple's second item (the count), in reverse order.
for e in sorted(counter.items(), key=lambda i: i[1], reverse=True):
    k, v = e
    print(f'{v}\t{k}')

So I used it in the following command pipeline:

tail -n+1 2022_place_canvas_history.csv | cut -d, -f2 | tqdm --total=160353104 | ./distinct > users2  

It was going really fast, with tqdm projecting less than 30 minutes, but when it got to 99% it became slower and slower. The process was using a lot of RAM, about 1.7 GiB. The machine I'm working with, the one with enough storage, is a VPS with just 2 GiB of RAM and ~1 TiB of storage. I thought it might be getting slow because the OS was having to handle all that memory, maybe swapping or something. I waited anyway; when tqdm finally hit 100%, meaning all data had been piped into ./distinct, after some seconds I got the following output:

160353105it [30:21, 88056.97it/s]
zsh: done       tail -n+1 2022_place_canvas_history.csv | cut -d, -f2 | tqdm --total=160353104 |
zsh: killed     ./distinct > users2

This time it was almost certainly caused by the out-of-memory killer, as spotted in the TLDR section of Marcus Müller's answer.

So I checked, and I don't have swap enabled on this machine. I disabled it after completing its setup with dm-crypt and LVM, as you can read in this answer of mine.

So what I'm thinking is to enable my LVM swap partition and try running it again. Also, at some moment I think I saw tqdm using 10 GiB of RAM, but I'm pretty sure I saw wrongly or mixed up the btop output, as later it showed only 10 MiB; I don't think tqdm would use much memory, as it just counts and updates some statistics when reading a new \n.

In Stéphane Chazelas' comment on this question, they say:

The system logs will possibly tell you.

I would like to know more about that: should I find something in journalctl? If so, how do I do it?
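A sketch of where OOM kills usually surface: the kernel ring buffer, which the journal also captures:

# The OOM killer logs to the kernel ring buffer:
sudo dmesg | grep -i 'out of memory'

# The same messages reach the journal via its kernel transport:
journalctl -k | grep -i 'out of memory'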

Anyway, as Marcus Müller's answer says, loading the CSV into SQLite may be by far the smartest solution, as it allows operating on the data in many ways and probably has some smart way to import it without running out of memory.

But now I'm doubly curious about how to find out why a process was killed: I want to know about my sort -T. -S1G, and now about my ./distinct (the latter almost surely killed over memory). So how do I check the logs that say why those processes were killed?

Update2

So I enabled my swap partition and took the suggestion Marcus Müller made in a comment on this question: using Python's collections.Counter. My new code (distinct2) looks like this:

#!/usr/bin/env python3
from collections import Counter
import sys

print(Counter(sys.stdin).most_common())

I ran it inside GNU Screen, so that even if I got a broken pipe again I could just resume the session, in the following pipeline:

tail -n+1 2022_place_canvas_history.csv | cut -d, -f2 | tqdm --total=160353104 --unit-scale=1 | ./distinct2 | tqdm --unit-scale=1 > users5  

That got me the following output:

160Mit [1:07:24, 39.6kit/s]
1.00it [7:08:56, 25.7ks/it]

As you can see, it took way more time to sort the data than to count it. Another thing to notice is that tqdm's second output line shows just 1.00it, meaning it got only a single line. So I checked the users5 file using head:

head -c 150 users5
[('kgZoJz//JpfXgowLxOhcQlFYOCm8m6upa6Rpltcc63K6Cz0vEWJF/RYmlsaXsIQEbXrwz+Il3BkD8XZVx7YMLQ==\n', 795), ('JMlte6XKe+nnFvxcjT0hHDYYNgiDXZVOkhr6KT60EtJAGa

As you can see, it printed the entire list of tuples on a single line. To solve this I used good old sed, as follows: sed 's/),/)\n/g' users5 > users6. After that I checked the users6 content using head; here is the output:

$ head users6
[('kgZoJz/...c63K6Cz0vEWJF/RYmlsaXsIQEbXrwz+Il3BkD8XZVx7YMLQ==\n', 795)
 ('JMlte6X...0EtJAGaezxc4e/eah6JzTReWNdTH4fLueQ20A4drmfqbqsw==\n', 781)
 ('LNbGhj4...apR9YeabE3sAd3Rz1MbLFT5k14j0+grrVgqYO1/6BA/jBfQ==\n', 777)
 ('K54RRTU...NlENRfUyJTPJKBC47N/s2eh4iNdAKMKxa3gvL2XFqCc9AqQ==\n', 767)
 ('8USqGo1...1QSbQHE5GFdC2mIK/pMEC/qF1FQH912SDim3ptEFkYPrYMQ==\n', 767)
 ('DspItMb...abcd8Z1nYWWzGaFSj7UtRC0W75P7JfJ3W+4ne36EiBuo2YQ==\n', 766)
 ('6QK00ig...abcfLKMUNur4cedRmY9wX4vL6bBoV/JW/Gn6TRRZAJimeLw==\n', 765)
 ('VenbgVz...khkTwy/w5C6jodImdPn6bM8izTHI66HK17D4Bom33ZrwuGQ==\n', 758)
 ('jjtKU98...Ias+PeaHE9vWC4g7p2KJKLBdjKvo+699EgRouCbeFjWsjKA==\n', 730)
 ('VHg2OiSk...3c3cr2K8+0RW4ILyT1Bmot0bU3bOJyHRPW/w60Y5so4F1g==\n', 713)

Good enough to work with later. Now I think I should add an update after trying to check who killed my sort using dmesg or journalctl. I'm also wondering if there is a way to make this script faster. Maybe creating a thread pool, but I'd have to check Python's dict behavior first; I also thought about other data structures, since the column I'm counting is a fixed-width string, so maybe a list could store the frequency of each distinct user hash. I also read the Python implementation of Counter: it's just a dict, pretty much the same implementation I had before, except it uses dict[key] = dict.get(key, 0) + 1 instead of dict.setdefault; setdefault was a mis-use, with no real need for it in this scenario.

Update3

So I got deep into the rabbit hole and totally lost focus on my objective. I started searching for faster sorting, considering writing some C or Rust, then realized that I already had the data I came for. So here are the dmesg output and one final tip about the Python script. The tip: it may be better to just count using a dict or Counter than to sort the data with the GNU sort tool first. Still, sort probably sorts faster than Python's sorted builtin.

About dmesg: it was pretty simple to find the out-of-memory kills. I just did sudo dmesg | less, pressed G to go all the way down, then ? to search backwards, and searched for the string Out. I found two of them: one for my Python script and another for my sort, the one that started this question. Here are those outputs:

[1306799.058724] Out of memory: Killed process 1611241 (sort) total-vm:1131024kB, anon-rss:1049016kB, file-rss:0kB, shmem-rss:0kB, UID:1000 pgtables:2120kB oom_score_adj:0
[1306799.126218] oom_reaper: reaped process 1611241 (sort), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
[1365682.908896] Out of memory: Killed process 1611945 (python3) total-vm:1965788kB, anon-rss:1859264kB, file-rss:0kB, shmem-rss:0kB, UID:1000 pgtables:3748kB oom_score_adj:0
[1365683.113366] oom_reaper: reaped process 1611945 (python3), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB

That's it. Thank you so much for the help so far; I hope this helps others too.

CPU frequency is too low due to faulty battery

Posted: 15 Apr 2022 01:04 AM PDT

I have a battery which apparently is dead (I've tried to re-calibrate it with power-calibrate; that didn't work).

$ acpi
Battery 0: Charging, 0%, charging at zero rate - will never fully charge.
Battery 1: Not charging, 0%

As a result, the CPU frequency is set to the lowest value:

$ grep MHz /proc/cpuinfo
cpu MHz     : 399.999
cpu MHz     : 400.064
cpu MHz     : 400.001
cpu MHz     : 400.046

$ sudo cpupower frequency-info
analyzing CPU 0:
  driver: intel_pstate
  CPUs which run at the same hardware frequency: 0
  CPUs which need to have their frequency coordinated by software: 0
  maximum transition latency:  Cannot determine or is not supported.
  hardware limits: 400 MHz - 2.60 GHz
  available cpufreq governors: performance powersave
  current policy: frequency should be within 400 MHz and 2.60 GHz.
                  The governor "performance" may decide which speed to use
                  within this range.
  current CPU frequency: Unable to call hardware
  current CPU frequency: 400 MHz (asserted by call to kernel)
  boost state support:
    Supported: no
    Active: no

I've tried to set the frequency manually, but it fails:

$ sudo cpupower frequency-set -f 2000
Setting cpu: 0
Error setting new values. Common errors:
- Do you have proper administration rights? (super-user?)
- Is the governor you requested available and modprobed?
- Trying to set an invalid policy?
- Trying to set a specific frequency, but userspace governor is not available,
   for example because of hardware which cannot be set to a specific frequency
   or because the userspace governor isn't loaded?

Raising it via sysfs didn't work either:

 $ echo "2000000" | sudo tee -a /sys/devices/system/cpu/cpu3/cpufreq/scaling_min_freq   2000000   $ cat /sys/devices/system/cpu/cpu3/cpufreq/scaling_cur_freq   400000  

How can I disable CPU frequency scaling completely or set it to the maximum?

The problem occurs only on Linux; OpenBSD runs at the maximum CPU frequency.

I'm also able to measure performance degradation with

dd if=/dev/zero bs=1M count=1024 | md5sum -  

where it gives me about ~50MB/s, almost 10 times less than expected.

Chromium not opening in WSL2

Posted: 14 Apr 2022 08:16 PM PDT

I am a Linux noob using the Ubuntu 20.04 distro. I use Linux via WSL2 and my OS is Windows 10. I managed to install Chromium, but the app simply fails to open, without a warning, when I try to open it from the GUI. I tried opening it from the terminal, and this came up: [0103/222716.865545:ERROR:exception_handler_server.cc(361)] getsockopt: Invalid argument. I referred to this video to install Chromium: https://youtu.be/FJ-ymbDIths. But I couldn't do one step in it (around 3:30, as I couldn't find the "Software and Updates" window). I don't know whether that caused the issue. How do I fix this? I haven't found any possible duplicates for this yet.

EDIT: As mentioned in the comments, I checked the version and found it to be WSL version 1! I don't know why that happened, though!

Ubuntu-20.04 | Running | 1

Is it possible to use lazy dynamic linking on Linux?

Posted: 15 Apr 2022 02:00 AM PDT

On UNIX, I may use a command line like:

cc -o executable *.o -zlazyload -lsomelib  

with the result that the libraries listed to the right of -zlazyload are marked with the LAZYLOAD ELF tag in the binary. This may be verified by calling dump -Lv executable; the result contains e.g.:

**** DYNAMIC SECTION INFORMATION ****
.dynamic:
[INDEX] Tag             Value
[1]     POSFLAG_1       LAZYLOAD
[2]     NEEDED          libsecdb.so.1

In this case, libsecdb is not loaded when the executable starts, but only when the first function from libsecdb is called.

This trick may be used to keep the in-core representation of the executable smaller if not all features are used.

Is there a way to achieve the same on Linux? The GNU linker seems to have a -zlazy flag, but in my experience it has no effect.

The background for this question: on Solaris it is simple to link the current Bourne Shell (bosh) with lazy linking, and this results in a shell that is nearly as small as dash when executing shell scripts, yet still faster than dash. The shared libraries for the interactive history editor are only loaded if the shell is used in interactive mode.
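As a side note, on Linux the dynamic section can be inspected with readelf, the rough counterpart of dump -Lv; a sketch (bearing in mind that GNU's -z lazy controls lazy symbol binding through the PLT, which is not the same thing as lazy loading of whole libraries):

# Link with lazy binding requested, then inspect the dynamic section:
cc -o executable *.o -Wl,-z,lazy -lsomelib
readelf -d executable | grep -E 'NEEDED|FLAGS|BIND'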

Apache2 returning "APACHE_RUN_DIR" error on docker container

Posted: 14 Apr 2022 10:02 PM PDT

When trying to start my roundcube mail server in a docker container, I get this apache error:

AH00111: Config variable ${APACHE_RUN_DIR} is not defined
apache2: Syntax error on line 80 of /etc/apache2/apache2.conf:
DefaultRuntimeDir must be a valid directory, absolute or relative to
ServerRoot

This happens even though I declared all the env vars in the dockerfile, like:

#FROM armv7/armhf-debian
FROM debian

RUN apt-get update -y && apt-get install sudo -y
RUN sudo apt-get install nano

# install exim, dovecot, fetchmail, roundcube
RUN DEBIAN_FRONTEND=noninteractive apt-get install -y exim4 sudo wget ca-certificates
RUN DEBIAN_FRONTEND=noninteractive apt-get install -y dovecot-imapd
RUN DEBIAN_FRONTEND=noninteractive apt-get install -y fetchmail procmail
RUN DEBIAN_FRONTEND=noninteractive apt-get install -y apache2 php5.* php5.*-mysql

#add
RUN sudo mkdir -p /etc/php5/apache2/

# add www-data to sudoers
RUN echo "www-data ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers

# removing std. html site
RUN sudo rm /var/www/html/index.html

# downloading roundcube
RUN wget https://github.com/roundcube/roundcubemail/releases/download/1.2.3/roundcubemail-1.2.3-complete.tar.gz
RUN tar xvf roundcubemail-1.2.3-complete.tar.gz
RUN cp -rf roundcubemail-1.2.3/. /var/www/html/
RUN chown -R www-data:www-data /var/www/html/
RUN echo "MAIN_TLS_ENABLE = 1" >> /etc/exim4/exim4.conf.localmacros

# setting date.timezone
RUN echo 'date.timezone = "Europe/Berlin"' >> /etc/php5/apache2/php.ini

# enable fetchmail as daemon
RUN echo "START_DAEMON=yes" >> /etc/default/fetchmail

# let dovecot listen on ipv6
RUN echo "listen = *" >> /etc/dovecot/dovecot.conf

VOLUME ["/var/log/exim4"]

ADD ./scripts /scripts

# clean for smaller image
RUN apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

# entrypoint
#ENTRYPOINT ["exim"]
ENTRYPOINT /bin/bash /scripts/init.sh
#CMD [/scripts/init.sh]

I'm starting it with an init.sh file as seen below

I also checked that the env vars and directories exist in the docker container: APACHE_RUN_DIR is set and /var/run/apache2 exists; inside it, apache2.pid contains an id.

When opening the localhost address, the raw PHP code is shown.

"systemctl set-property user-1009.slice CPUQuota=50%" - Failed to set unit properties on user-1009.slice: Unit user-1009.slice is not loaded

Posted: 15 Apr 2022 01:04 AM PDT

I'm trying to set per-user limits on processes; most of them are run with sudo --user. Why do user-1001 and user-1008 on my system have slice files, but I can't get one for 1009?

# systemctl set-property user-1009.slice CPUQuota=50%
Failed to set unit properties on user-1009.slice: Unit user-1009.slice is not loaded.

# systemctl status user-1009.slice
● user-1009.slice
   Loaded: loaded
   Active: inactive (dead)

I tried manually creating the file

# touch /etc/systemd/system/user-1009.slice

# systemctl status user-1009.slice
● user-1009.slice
   Loaded: masked (/etc/systemd/system/user-1009.slice; masked; vendor preset: disabled)
   Active: inactive (dead)

# systemctl set-property user-1009.slice CPUQuota=50%
Failed to set unit properties on user-1009.slice: Unit user-1009.slice is not loaded.

Also, this doesn't make sense to me: the testprocess (PID 26668) shows up in ps -U 1009, but it's running under the slice for user-1008 (because user-1008 used sudo to run it?):

# ps -U 1009 ; systemctl status user-1008.slice
  PID TTY          TIME CMD
15727 pts/1    00:00:00 bash
26668 ?        00:00:00 testprocess
● user-1008.slice - User Slice of testuser
   Loaded: loaded (/run/systemd/system/user-1008.slice; static; vendor preset: disabled)
  Drop-In: /run/systemd/system/user-1008.slice.d
           └─50-After-systemd-logind\x2eservice.conf, 50-After-systemd-user-sessions\x2eservice.conf, 50-Description.conf, 50-TasksMax.conf
   Active: active since Thu 2018-08-30 19:35:01 EDT; 2 days ago
   CGroup: /user.slice/user-1008.slice
           └─session-1801668.scope
             └─26668 ./testprocess

Searching around, all I could find is people saying to log in as the user to fix this, but obviously the user already has processes running. I also tried su - user1009 in another terminal, but that didn't seem to help.
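One hedged alternative: systemctl set-property only talks to loaded units, but a persistent drop-in should be picked up whenever the slice is next started (standard systemd drop-in conventions; untested on this exact setup):

# Persistent drop-in instead of the runtime set-property call:
mkdir -p /etc/systemd/system/user-1009.slice.d
cat > /etc/systemd/system/user-1009.slice.d/50-cpuquota.conf <<'EOF'
[Slice]
CPUQuota=50%
EOF
systemctl daemon-reload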

How do you open the control center from the Linux Mint Cinnamon Desktop terminal?

Posted: 15 Apr 2022 02:17 AM PDT

How do you open the control center from the Linux Mint Cinnamon Desktop terminal?
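If memory serves, Cinnamon's System Settings binary is cinnamon-settings, so something like the following should work (treat the module name as an assumption to verify on your install):

cinnamon-settings             # opens the control center
cinnamon-settings display     # jump straight to a specific panel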

Also, is it just personal preference which file manager is best? I don't have a problem with Nemo so far, but if there's something better out there, any suggestions?

Thanks for any help!

How to list the open file descriptors (and the files they refer to) in my current bash session

Posted: 15 Apr 2022 02:28 AM PDT

I am running an interactive bash session. I have created some file descriptors using exec, and I would like to see the current state of my bash session's descriptors.

Is there a way to list the currently open file descriptors?
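On Linux, a minimal sketch uses procfs; $$ expands to the current shell's PID:

# Each entry in /proc/$$/fd is an open descriptor of this shell;
# the symlink target shows the file it refers to.
ls -l /proc/$$/fd

# lsof can show the same, if installed:
lsof -p $$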

Mounting SD card on Linux Mint => "mount: special device does not exist"

Posted: 14 Apr 2022 11:03 PM PDT

Trying to mount an SD card connected via a USB SD-card reader.

dmesg shows that the USB device is connected and the card is detected:

[   84.696147] usb 1-3.2: new high-speed USB device number 7 using ehci-pci
[   84.791437] usb 1-3.2: New USB device found, idVendor=8564, idProduct=4000
[   84.791443] usb 1-3.2: New USB device strings: Mfr=3, Product=4, SerialNumber=5
[   84.791446] usb 1-3.2: Product: Transcend
[   84.791450] usb 1-3.2: Manufacturer: TS-RDF5
[   84.791452] usb 1-3.2: SerialNumber: 000000000039
[   85.060511] usb-storage 1-3.2:1.0: USB Mass Storage device detected
[   85.060953] scsi6 : usb-storage 1-3.2:1.0
[   85.061055] usbcore: registered new interface driver usb-storage
[   85.089647] usbcore: registered new interface driver uas
[   86.061604] scsi 6:0:0:0: Direct-Access     TS-RDF5  SD  Transcend    TS37 PQ: 0 ANSI: 6
[   86.061964] sd 6:0:0:0: Attached scsi generic sg2 type 0
[   86.575707] sd 6:0:0:0: [sdb] 61896704 512-byte logical blocks: (31.6 GB/29.5 GiB)
[   86.576965] sd 6:0:0:0: [sdb] Write Protect is off
[   86.576970] sd 6:0:0:0: [sdb] Mode Sense: 23 00 00 00
[   86.578223] sd 6:0:0:0: [sdb] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
[   86.585246]  sdb: [CUMANA/ADFS] sdb1 [ADFS] sdb1
[   86.590856] sd 6:0:0:0: [sdb] Attached SCSI removable disk

fdisk shows that it is connected, although it complains about an invalid argument:

$ sudo fdisk -l

Disk /dev/sdb: 31.7 GB, 31691112448 bytes
64 heads, 32 sectors/track, 30223 cylinders, total 61896704 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00006f83

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048     2474609     1236281    e  W95 FAT16 (LBA)
/dev/sdb2         2482176    61896703    29707264   85  Linux extended
/dev/sdb5         2490368     2605055       57344    c  W95 FAT32 (LBA)
/dev/sdb6         2607104    11855871     4624384   83  Linux
/dev/sdb7        11857920    61396991    24769536   83  Linux
fdisk: unable to seek on /dev/sdb1: Invalid argument

When trying to mount, I get the "special device does not exist" message:

$ sudo mount /dev/sdb2 /mnt -v
mount: you didn't specify a filesystem type for /dev/sdb2
       I will try all types mentioned in /etc/filesystems or /proc/filesystems
Trying ext3
mount: special device /dev/sdb2 does not exist

Any idea?
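One hedged diagnostic: the [CUMANA/ADFS] line in dmesg suggests the kernel misread the partition table, so it may never have created the /dev/sdb2 node. Worth checking:

ls -l /dev/sdb*          # are sdb1, sdb2, ... present at all?
sudo partprobe /dev/sdb  # ask the kernel to re-read the partition table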

Sync two Directories without rsync

Posted: 15 Apr 2022 12:03 AM PDT

I know what you are thinking right now "Just use rsync" but unfortunately this system does not have rsync, and I would like to come up with another solution.

The Setup:

  1. The source server has an Upload folder containing 140,000+ files (no subdirectories)
  2. The destination server needs that Upload folder with the 140,000+ files

Let's call it migrating with only Post-It notes... Now, to complicate matters, server 1 keeps getting new files every day, due to new uploads or thumbnails being generated, so the idea in your head right now ("just copy the files from 1 to 2") will not work: once I am done with a copy of about 20+ GB, I can start again, as there are already new files on the source server...

My Solution Idea

  • Copy the complete folder from source to destination
  • Find the latest creation date on the destination server and use it as the starting point on the source server
  • Copy all files created since that date from the source to the destination (the delta)
  • Set up a cron job to do this as often as possible.

My Problem

$ find /uploads/* -mtime -1
bash: /bin/find: Argument list too long

Now, before I start writing a bash script loop, I was wondering if someone out there could suggest another way of doing this without a bash script: good old low-level CLI.
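For what it's worth, the "Argument list too long" comes from the shell expanding the /uploads/* glob into 140,000+ arguments; passing the directory to find avoids that entirely, and the daily delta can be streamed without rsync. A rough sketch with GNU find/tar (host name and the -mtime window are placeholders):

cd /uploads && find . -maxdepth 1 -type f -mtime -1 -print0 \
  | tar --null --files-from=- -cf - \
  | ssh user@destination 'tar -xf - -C /uploads'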

RSH giving Connection Refused error on RHEL

Posted: 15 Apr 2022 02:06 AM PDT

I am trying to rsh to one of my VMs but am getting a connection refused error. I have checked /etc/hosts and /etc/resolv.conf; both have the proper hostname, IP and domain.

How to combine zsh autocomplete for wrapper function arguments and existing command

Posted: 14 Apr 2022 09:03 PM PDT

I work mostly in gvim and many terminals. Originally I preferred to open all my files in a single vim instance; to that end, I used an alias to open files from my terminals in the current 'vim server':

alias rv="gvim --remote-silent"  

But having many files from multiple projects open in a single vim instance impacts my productivity, so I'm upgrading my alias to a function.

# main function
rv() {
    local args options server

    options=$(getopt -o hils:t: -l "help,info,list,set:,target:" -- "$@")

    if [[ $? -ne 0 ]]; then
        echo "Failed to parse options."
        return 1
    fi

    # a little magic, necessary when using getopt
    eval set -- "$options"

    # go through the options with a case and use shift to analyze one option at a time.
    while true; do
        case "$1" in
            -h|--help)
                echo "Usage: $0 [-hil] [--help] [--info] [--list]";
                echo "       $0 {-s | --set} <name> [<file1 file2...>]";
                echo "       $0 {-t | --target} <name>] <file1 file2...>";
                return 0;;
            -i|--info)
                gvim_show_info;
                return 0;;
            -l|--list)
                gvim_list_servers;
                return 0;;
            -s|--set)
                gvim_set_server_name ${2:u};
                shift 2;;
            -t|--target)
                server="$2";
                shift 2;;
            --)
                shift;
                break;;
        esac
    done

    if [[ "$#" -eq 0 ]]; then
        # if no files specified...
        if [[ -n "$server" ]]; then
            # throw error if --target option was specified.
            echo "Error!  --target requires one or more filenames."
            return 1;
        fi
    else
        # if files were specified...
        if [[ -n "$server" ]]; then
            # if --target was specified
            gvim_run_remote $server "$@"
        else
            gvim_run_remote $(gvim_get_default_server) "$@"
        fi
    fi

    return 0;
}

Now this new rv has its own options. I can use it to:

  • list available vim servers (-l --list)
  • set the default vim server for the current shell (-s --set)
  • show the default vim server (-i --info)
  • open files in a specific vim server (-t --target)
  • open files in default vim server: rv files...

However, since rv is now a function instead of an alias, I lose the zsh completion I previously enjoyed. I've read up on creating a completion function, _rv, that shows rv's options, but I want to combine my completion options with the existing vim completion options. I know there may be conflicts between rv's -s and vim's -s, but I figure I can handle that elegantly with the -- separator.

TLDR; So, how do I create a completion script that combines the _arguments options for both _rv and _vim? I'd prefer to reuse _vim if possible instead of copy-pasting its arguments list into _rv.

Here's my _rv. Updated 2014/6/10 16:10

#compdef rv

_rv() {
    typeset -A opt_args
    local alternatives

    alternatives=(
        'args:rv options:_rv_options'
        'files:file:_vim_files'
    )

    _alternative $alternatives && return 0

    return 1
}

_rv_options() {
    local arguments

    arguments=(
        '(-i -l -s -t --info --list --set --target)'{-h,--help}'[Print usage info.]'
        '(-h -l -s -t --help --list --set --target)'{-i,--info}'[Print default vim server. As stored in $GVIM_SERVER.]'
        '(-i -h -s -t --info --help --set --target)'{-l,--list}'[Print list of existing vim servers.]'
        '(-i -h -l -t --info --help --list --target)'{-s,--set}'[Set default vim server for the current shell.]:vim servers:_rv_vim_servers'
        '(-i -h -l -s --info --help --list --set)'{-t,--target}'[Open files in a particular vim server.]:vim servers:_rv_vim_servers'
        )

    _arguments -s -C $arguments && return 0

    return 1
}

_rv_vim_servers() {
    local -a servers
    servers=( ${(f)"$(_call_program servers vim --serverlist 2>/dev/null)"} )
    _wanted servers expl server compadd -M 'm:{a-z}={A-Z}' -a servers && return
}

# invoke the completion command during autoload
_rv "$@"

Current Behavior

Currently, _rv completion is usable, but not ideal.

  • When I type rv <TAB>, I do not see the vim options. Only rv options and file paths are displayed. _vim is completing file paths for me, so hooray to that!
  • When I type rv -s <TAB>, I see the list of vim servers, but also the file paths are displayed. A file is not permitted at this point in the command, and should not appear in the autocomplete.

Expected Behavior

  • When I type rv <TAB>, I expect to see: 1) rv options, 2) vim options, 3) file path list
  • When I type rv -s <TAB>, I expect to see: 1) vim server names (as provided by _rv_vim_servers).
  • When I type rv /valid/file/path/<TAB>, I expect to only see a file path list. Since _vim already has this capability, I would prefer to rely on it.

Graphical Btrfs tool

Posted: 14 Apr 2022 08:38 PM PDT

Is there a graphical tool for creating Btrfs sub-partitions, in the vein of "GParted" or "system-config-lvm"? I'm running Debian squeeze.

In response to the first comment: Btrfs can do things like RAID and sub-partitions, much like LVM. I've read that Btrfs can be seen as a replacement for LVM. LVM has a graphical tool to manage these aspects; does Btrfs have the same?

GPU usage monitoring (CUDA)

Posted: 15 Apr 2022 01:39 AM PDT

I installed the CUDA toolkit on my computer and started a BOINC project on the GPU. In BOINC I can see that it is running on the GPU, but is there a tool that can show me more details about what is running on the GPU, such as GPU usage and memory usage?
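For NVIDIA/CUDA cards, the usual starting point is nvidia-smi, which ships with the driver; a quick sketch:

nvidia-smi             # one-shot: per-process GPU memory, utilization, temperature
watch -n 1 nvidia-smi  # refresh every second for live monitoring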

Jump to specific character in a line in VI

Posted: 14 Apr 2022 11:47 PM PDT

In VI, I know that if you do

:some_number  

and hit enter, you will jump to the line specified by "some_number". Is there an equivalent for jumping to a specific character in a single line?

Basically, I have a large CSV, and some characters in it are breaking the parser, so I have to debug it.

I'm getting an error message that basically says "unexpected character on line XXX character YYY".

I know how to get to XXX, but how do I get to YYY?
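In vim, the | motion moves to a column in the current line, and cursor() can do both jumps at once; a sketch with XXX/YYY standing in for the reported numbers:

:XXX                     " jump to line XXX (as above)
YYY|                     " then, in normal mode, jump to column YYY
:call cursor(XXX, YYY)   " or both at once as an ex command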
