Saturday, May 21, 2022

Recent Questions - Server Fault

Dovecot Mail-Crypt plugin using users password from userdb

Posted: 21 May 2022 07:46 PM PDT

I've been looking into encrypting the emails on my Dovecot server to provide genuinely secure storage of email (based on the users' passwords, naturally). The problem is that even though Dovecot can fetch the decryption key from a database, it apparently can't work from a password alone: it seems to require an actual public/private key pair, instead of somehow using the password hash to encrypt the emails.

I'm not sure if it's even technically possible, but for maximum security the user HAS to be the one in control, and the admin can never have access(!). Is this possible in any way?
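For what it's worth, the mail-crypt plugin's "folder keys" mode is documented to support roughly this: a per-user key pair whose private key is encrypted with the user's login password, supplied at authentication time. A minimal sketch (Dovecot 2.3 setting names; the userdb wiring is an assumption you'd adapt to your own passdb/userdb):

    # conf.d/90-mail-crypt.conf
    mail_plugins = $mail_plugins mail_crypt
    plugin {
      mail_crypt_curve = secp521r1
      mail_crypt_save_version = 2
      # refuse to fall back to an unencrypted private key
      mail_crypt_require_encrypted_user_key = yes
    }

    # userdb: hand the cleartext login password (%w) to mail-crypt as key password
    userdb {
      driver = passwd-file
      args = /etc/dovecot/users
      override_fields = mail_crypt_private_password=%w
    }

One caveat for the "admin can never have access" requirement: the password (and decrypted key) sits in Dovecot's memory while the user is logged in, so this protects mail at rest rather than against a fully hostile admin.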

Change registry host name where Docker images are installed to in linux

Posted: 21 May 2022 06:01 PM PDT

I have two servers - server A and server B. Server A acts as a Kubernetes deployment server and I am running a program on it that performs a docker pull to download an image from Server B (which is a server that acts as a cluster node). The image server A is trying to pull is

this.is.path.one/some/thing

from /usr/bin/docker

But the docker pull keeps failing with the error "Error response from daemon: received unexpected HTTP status: 504 Gateway Timeout". This is because the registry it's trying to pull from doesn't exist on server B. Server B is using an outdated registry when it should actually be using what Server A is trying to pull from

In server B, when I use the command "docker images" what is actually displayed is something in the format:

this-artifactory.com.path-two.com:8081:/folder/some/thing

As you can see, the ending of the registry is the same: "/some/thing" as the actual image exists on both registries, but the rest of it before that is different as they're located in different registries.

Is it possible to change the artifactory registry that the image actually exists on in server B to what docker is trying to pull from? The program ran on Server A is pulling from the registry that the image should actually be located in, but for some reason Server B keeps automatically downloading docker images to this-artifactory.com.path-two.com:8081:/folder/some/thing instead.

I would like to change the registry server that is currently set in Server B from this-artifactory.com.path-two.com:8081:/folder/some/thing to this.is.path.one/some/thing

I tried manually editing the registry host in server B's repositories.json file, as well as changing the repository host displayed by "docker images" using the docker image tag command, but I am still getting the same error when I attempt a docker pull.

Does anyone know how I can actually go about changing the registry host on Server B?
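A couple of points that may help, with a hedged sketch below. repositories.json is only Docker's local bookkeeping, and docker image tag only renames local copies; neither changes where docker pull resolves to, because the registry host is part of the image reference itself. So whatever reference the program on server A asks for is what has to point at a reachable registry. If the image genuinely exists in both registries, one workaround is to copy it explicitly under the name being pulled (the registry names below are your placeholders):

    # on server B: pull from the registry that works, retag, push to the other
    docker pull this-artifactory.com.path-two.com:8081/folder/some/thing:latest
    docker tag  this-artifactory.com.path-two.com:8081/folder/some/thing:latest \
                this.is.path.one/some/thing:latest
    docker push this.is.path.one/some/thing:latest   # needs push access there

If you only retag locally without pushing, a pull will still be attempted unless the Kubernetes imagePullPolicy allows the local copy to be used.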

Procmail autoresponse to a single sender (or few ones)

Posted: 21 May 2022 05:35 PM PDT

I've been looking at some similar examples and making slight changes to them to achieve this, but without success. This is the first recipe in $HOME/.procmailrc of the user "me".
Mail is received by user@domain.tld but the auto-response is not sent. CentOS 7, Postfix.

:0
* ^FROMuser@domain.tld
* !^FROM_DAEMON
* !^FROM_MAILER
* !^X-Loop: me@me.tld
| (formail -rk \
    -A "X-Loop: me@me.tld" \
    -A "Precedence: junk"; \
    echo "Testing";\
    echo "This is an automated response";\
    echo "Not sure to see your message";\
    echo "So please try again tomorrow" ) | $SENDMAIL -t -oi
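One likely culprit: ^FROM is not one of procmail's special macros (those are ^TO_, ^FROM_DAEMON, ^FROM_MAILER), so * ^FROMuser@domain.tld just looks for the literal text "FROMuser@domain.tld" at the start of a line, which never matches. A hedged, corrected version of the same recipe:

    :0
    * ^From:.*user@domain\.tld
    * !^FROM_DAEMON
    * !^FROM_MAILER
    * !^X-Loop: me@me\.tld
    | (formail -rk \
        -A "X-Loop: me@me.tld" \
        -A "Precedence: junk"; \
        echo "Testing";\
        echo "This is an automated response";\
        echo "Not sure to see your message";\
        echo "So please try again tomorrow" ) | $SENDMAIL -t -oi

Setting VERBOSE=on and LOGFILE=$HOME/procmail.log at the top of .procmailrc will show exactly which condition a message fails.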

How should the EFI System partition be made redundant without using hardware RAID?

Posted: 21 May 2022 05:19 PM PDT

What is the best current practice (BCP) for making the EFI System partition redundant without using hardware RAID?

If I create 3x EFI System partitions on different devices and then back up any changes made to the primary (mounted at /boot/efi) to the backup devices (mounted at /boot/efi-[bc]):

  • Will the system still boot if the primary device fails, i.e. will it select one of the backup EFI system partitions?
  • Will the system select an EFI System partition deterministically when it boots, i.e. must changes to the primary be replicated on the backups before the next reboot?

Is there a better approach such that the system will still boot if the primary device fails?
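For the software-only route, a common pattern is to keep the backup ESPs as byte-identical copies and register each one with the firmware, so BootOrder can fall through to a surviving disk. A sketch with assumed device names and loader path:

    # sync the primary ESP to the backups (re-run after every bootloader update)
    rsync -a --delete /boot/efi/ /boot/efi-b/
    rsync -a --delete /boot/efi/ /boot/efi-c/

    # register each ESP as its own boot entry
    efibootmgr -c -d /dev/sdb -p 1 -L "OS (ESP b)" -l '\EFI\myos\grubx64.efi'
    efibootmgr -c -d /dev/sdc -p 1 -L "OS (ESP c)" -l '\EFI\myos\grubx64.efi'
    efibootmgr -o 0000,0001,0002   # primary first, then the backups

Whether a given firmware deterministically skips a failed or absent entry is firmware-specific, which is one reason the copies must be kept in sync before any reboot: you can't fully control which ESP gets picked after a failure.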

Can a node.js application read files stored in a local server?

Posted: 21 May 2022 04:57 PM PDT

My store's ERP runs on a physical server. I access its data by first connecting through a VPN tunnel and then establishing an FTP connection with FileZilla (an FTP client). I was wondering if a Node application could do something similar to access that same data. I am no expert in networking, so I have no idea how to go about it. I found the Node package "ftp", but how can I reach my local server? How can I set up a VPN tunnel to access it? Is that even possible?
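In principle, yes: the VPN sits below the application layer, so once the machine running the Node process has the tunnel up, any FTP library can reach the server's private address exactly as FileZilla does. A quick way to validate the plumbing before writing code, assuming an OpenVPN profile and a hypothetical private address for the ERP host:

    # bring up the same VPN you use with FileZilla (profile name assumed)
    sudo openvpn --config office.ovpn

    # in another shell: if curl can fetch a file over FTP, a Node ftp client can too
    curl --user myuser:mypass 'ftp://10.0.0.5/exports/data.csv' -o data.csv

If that works, point the Node "ftp" package at the same host and credentials; if it doesn't, the problem is VPN/routing rather than anything in Node.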

System performance of Raspberry Pi (running k8s) is very poor - how to debug?

Posted: 21 May 2022 06:59 PM PDT

Summary

I'm running a slim homelab (Kubernetes cluster with few pods, Gitea, Drone, Docker Registry, and NFS share) on a 2-Pi4 cluster, and the system performance is poor. I've noted that the controller node's filesystem looks pretty slow - I'm not sure if that is a cause of, or caused by, the other symptoms. I'm going to reimage and reinstall the controller node on a new SD card in the hopes that that fixes it - but, in the meantime, I'm looking for other approaches for debugging this issue.

Situation

I've set up a minimal Kubernetes cluster on my own hardware, mostly following this guide with a few changes:

  • I only have two Pi4s (one with 8 GB RAM, one with 4 GB), so my cluster is slightly smaller (the 8 GB Pi is the control plane, the 4 GB one is the worker).
  • After finding Ubuntu Server to be a bit slow and unresponsive (and validating that impression with other Pi-thusiasts to make sure it wasn't just my perception/hardware), I used the 64-bit Raspbian OS instead.
    • Which, in turn, meant that my cmdline.txt change was slightly different - when I used the Ubuntu version from that article, the Pi did not come back up from a reboot
  • The cluster isn't (yet!) on its own private network - they're just communicating via my main home network.
  • The controller node has a hard drive connected via USB3, and shared via NFS for use by k8s pods.
  • I also installed fail2ban, Gitea, Drone, and a rudimentary Docker container registry (as well as the aforementioned NFS share) on the controller node - I thought it was best to host the CI/CD components independently of the k8s cluster because they are dependencies of it (happy to get feedback on that, but I think it's tangential to this question).

Problem

The cluster is up and running, and I've been able to run some deployments on it (Kubernetes Dashboard, jellyfin, grafana, and a small nginx-based deployment of my Hugo-built blog). This (along with the aforementioned CI/CD components and NFS share) seems like it should be a pretty insignificant load for the cluster (and I confirmed this expectation with the author of that article) - I've previously run all of those (minus the Kubernetes overhead) and more on the single 4Gb Pi4 alone, with no issues. However, the system is very slow and unresponsive:

  • Simple shell commands (e.g. man ln, df, uptime) will take ~10 seconds to complete; apt-get install or pip3 install commands are much slower than usual (double-digit minutes)
  • Loads of simple pages in Gitea's UI (e.g.) can take anywhere between 10 seconds and a minute.
  • Simple builds of the blog (Gitea link, or GitHub mirror if that's unavailable) take over 20 minutes.
  • Creation of a simple pod can take double-digit minutes
  • The Kubernetes Dashboard will often display a spinner icon for a pane/page for ~20 seconds before populating information.
  • When using kubectl proxy to view the dashboard, sometimes instead of a page the browser will show a JSON payload including the message error trying to reach service: dial tcp <IP> connect: connection refused. If I instead use kubectl port-forward -n kubernetes-dashboard service/kubernetes-dashboard 8443:443, I get the following error in the terminal:
Forwarding from 127.0.0.1:8443 -> 8443
Forwarding from [::1]:8443 -> 8443
Handling connection for 8443
E0520 22:03:24.086226   47798 portforward.go:400] an error occurred forwarding 8443 -> 8443: error forwarding port 8443 to pod a8ef295e1e42c5c739f761ab517618dd1951ad0c19fb517849979edb80745763, uid : failed to execute portforward in network namespace "/var/run/netns/cni-cfc573de-3714-1f3a-59a9-96285ce328ca": read tcp4 127.0.0.1:45274->127.0.0.1:8443: read: connection reset by peer
Handling connection for 8443
Handling connection for 8443
E0520 22:03:29.884407   47798 portforward.go:385] error copying from local connection to remote stream: read tcp4 127.0.0.1:8443->127.0.0.1:54550: read: connection reset by peer
Handling connection for 8443
E0520 22:05:58.069799   47798 portforward.go:233] lost connection to pod

What I've tried so far

System Resources

First I checked the system resources on all k8s machines. htop showed:

  • controller - CPU load <10% across all 4 cores, memory usage at ~2G/7.6G, Swap 47/100M - Load average 11.62 10.17 7.32
  • worker - CPU load <3% across all 4 cores, memory usage at ~300M/1.81G, Swap 20/100M - Load average 0.00 0.00 0.00

Which is odd in two respects:

  • If Load average is so high (this suggests that 100% utilization corresponds to "Load average = number of cores", so a Load average of 11 indicates that this 4-core Pi is at nearly 300% capacity), why is CPU usage so low?
  • Why is worker showing such a low load average? In particular, I've confirmed that there is a ~50/50 split of k8s pods between controller and worker, and confirmed that I've set AGENTS_ENABLED=true (ref) on the Drone server.

I followed the instructions here to investigate High System Load and Low CPU Utilization:

  • w confirmed high system load
  • sar output:
$ sar -u 5
Linux 5.15.32-v8+ (rassigma)    05/21/2022      _aarch64_       (4 CPU)

02:41:57 PM     CPU     %user     %nice   %system   %iowait    %steal     %idle
02:42:02 PM     all      2.47      0.00      1.16     96.37      0.00      0.00
02:42:07 PM     all      2.77      0.00      2.21     95.02      0.00      0.00
02:42:12 PM     all      3.97      0.00      1.01     95.02      0.00      0.00
02:42:17 PM     all      2.42      0.00      1.11     96.47      0.00      0.00
^C
Average:        all      2.91      0.00      1.37     95.72      0.00      0.00

So, a lot of %iowait!

  • ps -eo s,user | grep "^[RD]" | sort | uniq -c | sort -nbr showed
6 D root
1 R pi

, so that doesn't seem like the cause here (the article lists an example with thousands of threads in D/R states)

Based on these two questions, I'll include here the output of various commands run on controller, though I don't know how to interpret them:

$ netstat -i 15
Kernel Interface table
Iface      MTU    RX-OK RX-ERR RX-DRP RX-OVR    TX-OK TX-ERR TX-DRP TX-OVR Flg
br-5bde1  1500    15188      0      0 0         15765      0      0      0 BMRU
br-68f83  1500      121      0      0 0           241      0      0      0 BMU
cni0      1450  1546275      0      0 0       1687849      0      0      0 BMRU
docker0   1500   146703      0      0 0        160569      0      0      0 BMRU
eth0      1500  5002006      0      0 0       2325706      0      0      0 BMRU
flannel.  1450   161594      0      0 0        168478      0   4162      0 BMRU
lo       65536  6018581      0      0 0       6018581      0      0      0 LRU
veth1729  1450    41521      0      0 0         59590      0      0      0 BMRU
veth1a77  1450   410622      0      0 0        453044      0      0      0 BMRU
veth35a3  1450       82      0      0 0         20237      0      0      0 BMRU
veth3dce  1500    59212      0      0 0         61170      0      0      0 BMRU
veth401b  1500       28      0      0 0          4182      0      0      0 BMRU
veth4257  1450   108391      0      0 0        173055      0      0      0 BMRU
veth4642  1500    12629      0      0 0         16556      0      0      0 BMRU
veth6a62  1450       83      0      0 0         20285      0      0      0 BMRU
veth7c18  1450    47952      0      0 0         59756      0      0      0 BMRU
veth8a14  1450       82      0      0 0         20279      0      0      0 BMRU
vethcc5c  1450   655457      0      0 0        716329      0      0      0 BMRU
vethe535  1450       17      0      0 0           769      0      0      0 BMRU
vethf324  1450   180986      0      0 0        198679      0      0      0 BMRU
wlan0     1500        0      0      0 0             0      0      0      0 BMU

$ iostat -d -x
Linux 5.15.32-v8+ (rassigma)    05/21/2022      _aarch64_       (4 CPU)

Device            r/s     rkB/s   rrqm/s  %rrqm r_await rareq-sz     w/s     wkB/s   wrqm/s  %wrqm w_await wareq-sz     d/s     dkB/s   drqm/s  %drqm d_await dareq-sz     f/s f_await  aqu-sz  %util
mmcblk0          0.20     14.65     0.07  26.90 1031.31    74.40    3.33     56.68     1.64  33.04 4562.85    17.02    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00   15.40  51.07
sda              0.27     28.31     0.05  15.37   25.75   104.42    0.36     26.56     0.24  39.99   64.19    72.81    0.00      0.00     0.00   0.00    0.00     0.00    0.04   90.24    0.03   0.56

$ vmstat 15
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 8  3  48640 827280 129600 4607164    0    0    11    21   15   42  4  1 71 24  0
 0  5  48640 827128 129600 4607216    0    0     1    44 2213 4267  4  1 31 64  0
 0 10  48640 827660 129600 4607216    0    0     0    47 1960 3734  4  1 36 59  0
 0  5  48640 824912 129600 4607624    0    0     1   121 2615 4912  6  2 15 77  0
 2 12  48640 824416 129600 4607692    0    0     0   102 2145 4129  4  2 30 64  0
 1  7  48640 822428 129600 4607972    0    0     3    81 1948 3564  6  2 10 83  0
 0  5  48640 823312 129600 4608164    0    0     4    62 2328 4273  5  2 12 81  0
 0  7  48640 824320 129600 4608220    0    0     1   143 2433 4695  5  2  9 84  0
...

51% utilization on the SD card (from the iostat output) is probably reasonably high, but not problematically so, I would have thought?

Filesystem

Referencing this article on how to test (SD card) filesystem performance on controller and worker (both are using SD cards from the same batch, which advertised 10 MB/s write speed):

controller - $ dd if=/dev/zero of=speedtest bs=1M count=100 conv=fdatasync
100+0 records in
100+0 records out
104857600 bytes (105 MB, 100 MiB) copied, 43.2033 s, 2.4 MB/s

worker - $ dd if=/dev/zero of=speedtest bs=1M count=100 conv=fdatasync
100+0 records in
100+0 records out
104857600 bytes (105 MB, 100 MiB) copied, 5.97128 s, 17.6 MB/s

controller's FS write appears to be ~7 times slower than worker's. I'm not sure how to causally interpret that, though - it could be that the controller's filesystem is slow which is causing the other symptoms, or it could be that there is some other process-throughput-bottleneck which is causing both the slow filesystem and the other symptoms.
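To separate "slow card" from "card being hammered by something else", a random-I/O test is a useful complement to the sequential dd above, since SD cards often collapse under small random writes (which is roughly the etcd/Kubernetes access pattern). A sketch using fio (needs installing first):

    sudo apt-get install -y fio
    # 4k random writes with the page cache bypassed
    fio --name=randwrite --filename=fio-test --size=256M --bs=4k \
        --rw=randwrite --ioengine=libaio --direct=1 \
        --runtime=60 --time_based --group_reporting
    rm fio-test

Single-digit IOPS here on the controller, versus healthy numbers on the worker, would point firmly at a dying (or counterfeit) card rather than a software problem.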

Network

My home network is behind a fairly standard OPNSense router.

Checking external network connectivity with Speedtest CLI:

controller - $ speedtest
     Server: Tekify Fiber & Wireless - Fremont, CA (id = 6468)
        ISP: Sonic.net, LLC
    Latency:     3.53 ms   (0.15 ms jitter)
   Download:   859.90 Mbps (data used: 523.3 MB )
     Upload:   932.58 Mbps (data used: 955.5 MB )
Packet Loss:     0.0%
---
worker - $ speedtest
     Server: Tekify Fiber & Wireless - Fremont, CA (id = 6468)
        ISP: Sonic.net, LLC
    Latency:     3.29 ms   (1.84 ms jitter)
   Download:   871.33 Mbps (data used: 776.6 MB )
     Upload:   917.25 Mbps (data used: 630.5 MB )
Packet Loss:     0.0%

I did plan to test intra-network speed, but given how long it took to get to this point of debugging and the strong signals that there's an issue with controller's SD card (high %iowait, slow dd write performance), I elected to move on to replacing that first before checking network.


Updates

  • After re-imaging on a fresh SD card, with absolutely nothing else installed on it other than Raspbian, the dd if=/dev/zero of=speedtest bs=1M count=100 conv=fdatasync Filesystem-speed test gives 17.2 MB/s for the "reborn" controller node. I'll install the k8s cluster and other tools, and test again.

How to reduce the time it takes a request to pass from an ALB to the actual Fargate server?

Posted: 21 May 2022 12:47 PM PDT

I have a webhook endpoint to which our service provider sends a payload that I have to respond to within 2 seconds. I've been getting far too many timeout errors from the service provider, meaning I wasn't able to respond within 2 seconds.

I did some digging into when the Fargate server gets the payload vs. when the ALB receives it. Going through some of the ALB's access logs, I found that it takes about a second to pass the payload from the ALB to the Fargate server.

Here's the timestamp at which the request arrived at the ALB - 15:19:20.01 - and my server received it at 15:19:21.69.

There's over a second of difference, and I want to know how to reduce it. One solution I thought of: instead of registering my domain + the URI with the service provider as the webhook target, I register my IP + the URI, so no forwarding needs to be done by the ALB.

Let me know what you guys think.

EDIT - The solution I thought of was pretty stupid, because Fargate provides a new IP every time a new task is deployed (as far as I know). Also, the ALB forwards the request/payload to the ECS target group; just throwing that fact in as well.
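When reading the ALB access logs for this, it's more reliable to use the ALB's own latency fields than to compare two clocks: request_processing_time, target_processing_time and response_processing_time are the 6th, 7th and 8th space-separated fields of each log line (per the ALB access-log format). A rough sketch for finding the slow hop:

    # request_time target_time response_time target -- worst target times first
    awk '{ print $6, $7, $8, $5 }' alb-access.log | sort -k2 -nr | head

A large target_processing_time with a tiny request_processing_time means the second is being spent inside the task (app queueing, CPU throttling, cold connections), not in the ALB forwarding itself; clock skew between the ALB timestamp and your app's log is also worth ruling out before optimizing.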

The packet goes through the other router

Posted: 21 May 2022 02:43 PM PDT

I want to connect two hosts in parallel through two different models of virtual router, made redundant by VRRP.

Normally, packets should be sent through only one of the routers. However, when a packet is sent it goes through both routers, which results in duplicate ping packets.

Why do the packets go through the other router?

The network configuration is shown below.

  • Host1(Ubuntu20.04)

    • IP Address:192.168.0.1
    • gateway:192.168.0.5
  • Host2(Ubuntu20.04)

    • IP Address:192.168.1.1
    • gateway:192.168.1.5
  • Physical Router(Ubuntu20.04)
    There are two virtual routers created by vagrant and Virtualbox inside.

  • Virtual Router(Vyos)

    • Use vagrant box "kun432/vyos"
    • eth1:192.168.0.2
    • eth2:192.168.1.2
  • Virtual Router(vSRX)

    • Use vagrant box "juniper/ffp-12.1X47-D15.4-packetmode"
    • ge-0/0/1:192.168.0.3
    • ge-0/0/2:192.168.1.3
  • 192.168.0.0/24 network Virtual IP Address: 192.168.0.5

  • 192.168.1.0/24 network Virtual IP Address: 192.168.1.5

  • NetworkBridge: bridge-utils

(The VRRP settings and network topology were shown as images, omitted here.)
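To see what is actually happening, it can help to watch the ICMP stream and the VRRP adverts on both virtual routers at once; with a VirtualBox/bridge-utils setup, both routers handling the same packet often means they can't see each other's VRRP traffic and both believe they are master. A hedged sketch:

    # on each virtual router: who is forwarding the echo requests?
    sudo tcpdump -ni eth1 icmp

    # who is claiming master? VRRP adverts go to 224.0.0.18, IP protocol 112
    sudo tcpdump -ni eth1 'ip proto 112'

If both routers keep sending adverts for the same VRID, the multicast adverts are being dropped somewhere in the VirtualBox/bridge path (a classic cause), so both routers stay master and both forward, producing exactly the duplicated pings described.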

How to prevent 3rd party domains using our server/domains for email envelopes

Posted: 21 May 2022 11:30 AM PDT

We recently received a whole bunch of notification emails stating that an email which apparently originated from our servers was blocked for being spam, but we can't find the source emails on our server, so we wanted to ask if we're missing anything obvious.

Below is the notification email. mail@ourdomain.org is our domain email address:

Notification email:

A message that you sent was rejected by the local scanning code that checks incoming messages on this system. The following error was given:

This message was classified as SPAM and may not be delivered

------ This is a copy of your message, including all the headers. ------

Received: from amcham by vps62989.inmotionhosting.com with local (Exim
4.95) (envelope-from mail@ourdomain.org) id 1nrvvQ-0002CA-NB for
businesscenter@ecamcham.com; Thu, 19 May 2022 23:15:48 -0700
To: businesscenter@ecamcham.com Subject: Contact
X-PHP-Script: amchamec.com/index.php/contactanos for 104.149.136.246
X-PHP-Originating-Script: 1003:class.phpmailer.php
Date: Fri, 20 May 2022 06:15:48 +0000
From: "? Donna just viewed your profile! Click here: https://spamPornURLRemoved.com ?"
mail@ourdomain.org Message-ID:
bb5036ed71d5e3f3be887f7be3e5997f@amchamec.com MIME-Version: 1.0
Content-Type: text/html; charset=utf-8

Customize this e-mail also. You will receive it as administrator.

Nombre y Apellido:? Donna just viewed your profile! Click here: https://wondergirl22.page.link/29hQ?bvh9r ?
E-mail:mail@ourdomain.org
{CompanySize:caption}:{CompanySize:value}
{Position:caption}:{Position:value}
{ContactBy:caption}:{ContactBy:value}
{ContactWhen:caption}:{ContactWhen:value}

Some bullet points:

  • The envelope was from mail@ourdomain.org. This is a genuine email address on our servers. It is NOT a mailbox but a forwarder that delivers to my business email mailbox.
  • Checking Exim, there is no record of the 1nrvvQ-0002CA-NB mail ID or the bb5036ed71d5e3f3be887f7be3e5997f@amchamec.com message ID except in the above message.
  • class.phpmailer.php does not exist on this server, but we do use PHP and clients do send mailings using PHPMailer (but not from this domain).
  • Our servers always use PTR, DKIM, SPF, DMARC , etc.

Our problem

So, there have been enough of these notification emails that I'm not sure they're fake, but checking the Exim logs I can't find these IDs or email addresses, so I'm not sure what's going on. I can only conclude that the email is entirely third-party, but that they're somehow "piggybacking" on our domain as the "envelope".

Question

What can we do to prevent 3rd party domains using our domains as "envelopes" for emails they send?

If the illustrated email above is spam or fake: a) Is this likely? b) Why?
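On prevention: SMTP itself doesn't stop a third party from writing your address on the envelope; what you can do is publish policies strict enough that receivers reject such mail. A sketch of the relevant DNS records (values are illustrative, not your real hosts):

    ; SPF: only the listed hosts may use ourdomain.org as envelope sender
    ourdomain.org.        IN TXT "v=spf1 mx ip4:203.0.113.10 -all"

    ; DMARC: ask receivers to reject unaligned mail and send aggregate reports
    _dmarc.ourdomain.org. IN TXT "v=DMARC1; p=reject; rua=mailto:dmarc@ourdomain.org"

With -all and p=reject, mail from other systems using your envelope fails at any receiver that checks. Separately, the Received: and X-PHP-Script: headers above suggest the message was generated locally on the InMotion box by a contact form on amchamec.com, so it may be a co-hosted site's form being abused rather than purely external spoofing - worth raising with the hosting provider.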

Exim Smarthost setup works in starttls but not in smtps

Posted: 21 May 2022 02:42 PM PDT

I have set up exim4 as a local MTA with smarthost delivery (Debian 10 VM), following this guide: Exim on DebianWiki

If my smarthost expects an SSL connection (SMTP over SSL), it does not work.

When a local web application sends an email to localhost:25, it remains stuck in the queue; if I try to force delivery, this happens:

root@testbug:~# date && exim -v -M 1nrqKZ-0003Ji-WE
Fri 20 May 2022 10:33:50 AM CEST
delivering 1nrqKZ-0003Ji-WE
R: smarthost for name.surname@gmail.com
T: remote_smtp_smarthost for name.surname@gmail.com
Transport port=25 replaced by host-specific port=465
Connecting to smtps.aruba.it [62.149.128.218]:465 ... connected

=========== stuck for a few seconds ===========

  SMTP(close)>>
LOG: MAIN
  H=smtps.aruba.it [62.149.128.218]: Remote host closed connection in response to initial connection
Transport port=25 replaced by host-specific port=465
Connecting to smtps.aruba.it [62.149.156.218]:465 ... connected

=========== stuck for a few seconds ===========

  SMTP(close)>>
LOG: MAIN
  H=smtps.aruba.it [62.149.156.218]: Remote host closed connection in response to initial connection
LOG: MAIN
  == name.surname@gmail.com R=smarthost T=remote_smtp_smarthost defer (-18) H=smtps.aruba.it [62.149.156.218]: Remote host closed connection in response to initial connection

This is the log for that:

root@testbug:~# tail -3 /var/log/exim4/mainlog
2022-05-20 10:35:31 1nrqKZ-0003Ji-WE H=smtps.aruba.it [62.149.128.218]: Remote host closed connection in response to initial connection
2022-05-20 10:37:11 1nrqKZ-0003Ji-WE H=smtps.aruba.it [62.149.156.218]: Remote host closed connection in response to initial connection
2022-05-20 10:37:11 1nrqKZ-0003Ji-WE == name.surname@gmail.com R=smarthost T=remote_smtp_smarthost defer (-18) H=smtps.aruba.it [62.149.156.218]: Remote host closed connection in response to initial connection

Please note that the server accepts SSL connections:

root@testbug:~# openssl s_client -connect smtps.aruba.it:465
CONNECTED(00000003)
depth=2 C = IT, L = Milan, O = Actalis S.p.A./03358520967, CN = Actalis Authentication Root CA
[...]
No client certificate CA names sent
[...]
Verification: OK
---
New, TLSv1.2, Cipher is ECDHE-RSA-AES256-GCM-SHA384
Server public key is 2048 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
    Protocol  : TLSv1.2
[...]
---
220 smtpdh08.ad.aruba.it Aruba Outgoing Smtp  ESMTP server ready

If I switch to a different smarthost server, smtp.mydomain.it, run by the same provider (so I use the same credentials to authenticate against the smarthost) on port 25 with STARTTLS, things run smoothly and the emails are delivered (via STARTTLS) as soon as I restart exim:

2022-05-20 10:42:48 exim 4.92 daemon started: pid=4015, -q30m, listening for SMTP on [127.0.0.1]:25 [::1]:25
2022-05-20 10:42:48 Start queue run: pid=4017
2022-05-20 10:42:51 1nrqKZ-0003Ji-WE => name.surname@gmail.com R=smarthost T=remote_smtp_smarthost H=smtp.mydomain.it [62.149.128.203] X=TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256 CV=no DN="C=IT,ST=Bergamo,L=Ponte San Pietro,O=Aruba S.p.A.,CN=*.aruba.it" A=plain C="250 2.0.0 ryDgn51y1TRWPryDinATBj mail accepted for delivery"
2022-05-20 10:42:51 1nrqKZ-0003Ji-WE Completed
2022-05-20 10:42:51 End queue run: pid=4017

You can see the email is correctly delivered via STARTTLS:

root@testbug:~# ngrep -qt -dany port 25
interface: any
filter: ( port 25 ) and (ip || ip6)

T 2022/05/20 10:42:48.900722 62.149.128.203:25 -> MY.SRV.IP.ADDR:47932 [AP] #4
  220 smtpdh13.ad.aruba.it Aruba Outgoing Smtp  ESMTP server ready..

T 2022/05/20 10:42:48.900903 MY.SRV.IP.ADDR:47932 -> 62.149.128.203:25 [AP] #5
  EHLO testbug.mydomain.it..

T 2022/05/20 10:42:49.025487 62.149.128.203:25 -> MY.SRV.IP.ADDR:47932 [AP] #7
  250-smtpdh13.ad.aruba.it hello [MY.SRV.IP.ADDR], pleased to meet you..250-HELP..250-AUTH LOGIN PLAIN..250-SIZE 524288000..250-ENHANCEDSTATUSCODES..250-8BITMIME..250-STARTTLS..250 OK..

T 2022/05/20 10:42:49.025702 MY.SRV.IP.ADDR:47932 -> 62.149.128.203:25 [AP] #8
  STARTTLS..

T 2022/05/20 10:42:49.092110 62.149.128.203:25 -> MY.SRV.IP.ADDR:47932 [AP] #10
  220 2.0.0 Ready to start TLS..

T 2022/05/20 10:42:49.111151 MY.SRV.IP.ADDR:47932 -> 62.149.128.203:25 [AP] #11
  ....L...H..d.@"^.`I.....OU..x.N|Z..."...._@..:.........,.......+.....0...../.......5.....[...]

Can anyone point me in the right direction to investigate?

Could this be a network/ports issue? Or a certificate issue? (I generate my self-signed certificate in a slightly different way, and actually I don't know why I need one, or whether this certificate is validated by the server at all.)

Thanks a lot.

EDIT: got a more verbose output for force delivery a message: https://pastebin.com/axRsQmwy
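The symptom pattern (TCP connects, long pause, remote closes, exim never sends anything) is the classic implicit-TLS vs STARTTLS mismatch: on port 465 the server expects a TLS handshake immediately, while exim sits waiting for an SMTP banner. Recent exim (4.92+) has a per-transport knob for TLS-on-connect; a hedged sketch for the Debian split config:

    # /etc/exim4/conf.d/transport/30_exim4-config_remote_smtp_smarthost
    remote_smtp_smarthost:
      driver = smtp
      protocol = smtps    # TLS-on-connect, for implicit-TLS smarthosts
      ...

The self-signed certificate is likely unrelated: it would only matter if the smarthost requested a client certificate, and the openssl output above ("No client certificate CA names sent") shows it doesn't.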

Unable to join node to the cluster [closed]

Posted: 21 May 2022 05:05 PM PDT

[preflight] Running pre-flight checks
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
    [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.3. Latest validated version: 19.03
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
error execution phase preflight: unable to fetch the kubeadm-config ConfigMap: failed to decode cluster configuration data: no kind "ClusterConfiguration" is registered for version "kubeadm.k8s.io/v1beta3" in scheme "k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/scheme/scheme.go:31"
To see the stack trace of this error execute with --v=5 or higher
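The decode failure ("no kind ClusterConfiguration ... kubeadm.k8s.io/v1beta3") usually means the joining node's kubeadm is older than the control plane's: the v1beta3 config API only exists in kubeadm v1.22 and later, so an older binary can't parse the cluster's kubeadm-config ConfigMap. A hedged check/fix sketch (apt-based install assumed; pick the version that matches your control plane):

    kubeadm version -o short      # run on both the control plane and the joining node

    # on the joining node, bring kubeadm up to the control plane's version
    apt-get update && apt-get install -y kubeadm=1.23.6-00
    kubeadm join ...              # then retry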

Asterisk: PRI to PRI connection. Green light but a lot of HDLC errors

Posted: 21 May 2022 05:43 PM PDT

I want to connect (for learning, nothing serious) two Asterisk servers via ISDN PRI. The first server will act as the "telco", so it is pri_net; the second server is the CPE, so I will use pri_cpe.

The cards are

pbx1-net: Digium TE205P
pbx2-cpe: OpenVox D210P

Both servers/PBXes use Debian 11 and DAHDI 2.11 drivers.

On pbx-net, this is /etc/dahdi/system.conf:

# Span 1: WCTDM/4 "Wildcard TDM400P REV E/F Board 5" (MASTER)
fxoks=1
echocanceller=mg2,1
fxoks=2
echocanceller=mg2,2
# channel 3, WCTDM/4/2, no module.
fxsks=4
echocanceller=mg2,4

# Span 2: TE2/0/1 "T2XXP (PCI) Card 0 Span 1"
span=2,0,0,ccs,hdb3,crc4
# termtype: te
bchan=5-19,21-35
dchan=20
echocanceller=mg2,5-19,21-35

# Span 3: TE2/0/2 "T2XXP (PCI) Card 0 Span 2"
span=3,0,0,ccs,hdb3,crc4
# termtype: te
bchan=36-50,52-66
dchan=51
echocanceller=mg2,36-50,52-66

# Global data

loadzone    = it
defaultzone = it

This is /etc/asterisk/chan_dahdi.conf on pbx-net:

[trunkgroups]

[channels]
language=it
context=local
switchtype=euroisdn
signalling=pri_net
usecallerid=yes
hidecallerid=no
callwaiting=yes
usecallingpres=yes
callwaitingcallerid=yes
threewaycalling=yes
transfer=yes    ; set to no if you use NT PtMP
canpark=yes
cancallforward=yes
callreturn=yes
context=local
echocancel=yes
channel => 1-15,17-31
echocancelwhenbridged=yes
group=1
callgroup=1
pickupgroup=1
immediate=no
#include /etc/asterisk/dahdi-channels.conf

This is /etc/asterisk/dahdi-channels.conf on pbx-net:

; Span 1: WCTDM/4 "Wildcard TDM400P REV E/F Board 5" (MASTER)
;;; line="1 WCTDM/4/0"
signalling=fxo_ks
callerid="Channel 1" <4001>
mailbox=4001
group=5
context=from-dahdi
channel => 1

;;; line="2 WCTDM/4/1"
signalling=fxo_ks
callerid="Channel 2" <4002>
mailbox=4002
group=5
context=from-dahdi
channel => 2

;;; line="4 WCTDM/4/3"
signalling=fxs_ks
callerid=asreceived
group=0
context=from-dahdi
channel => 4

; Span 2: TE2/0/1 "T2XXP (PCI) Card 0 Span 1"
group=0,12
context=from-dahdi
switchtype = euroisdn
signalling = pri_net
channel => 5-19,21-35

; Span 3: TE2/0/2 "T2XXP (PCI) Card 0 Span 2"
group=0,13
context=from-dahdi
switchtype = euroisdn
signalling = pri_net
channel => 36-50,52-66

On the pbx-cpe side, the files are identical except for /etc/dahdi/system.conf (in span=1,1,0, the second field 1 means span 1 takes its timing from the pri_net side, and the third field 0 means the cable is 0-133 feet):

# Span 1: TE2/0/1 "T2XXP (PCI) Card 0 Span 1" (MASTER)
span=1,1,0,ccs,hdb3
# termtype: te
bchan=1-15,17-31
dchan=16
echocanceller=mg2,1-15,17-31

# Span 2: TE2/0/2 "T2XXP (PCI) Card 0 Span 2"
span=2,2,0,ccs,hdb3
# termtype: te
bchan=32-46,48-62
dchan=47
echocanceller=mg2,32-46,48-62

# Global data

loadzone        = it
defaultzone     = it

The other two files are identical, except that pri_net becomes pri_cpe.

Now the problem:

On the pri_cpe PBX, the CLI report is OK:

dahdi show status
Description                              Alarms  IRQ    bpviol CRC    Fra Codi Options  LBO
T2XXP (PCI) Card 0 Span 1                OK      0      0      0      CCS HDB3          0 db (CSU)/0-133 feet (DSX-1)

On the pri_net PBX, the CLI report is OK:

dahdi show status
Description                              Alarms  IRQ    bpviol CRC    Fra Codi Options  LBO
Wildcard TDM400P REV E/F Board 5         OK      0      0      0      CAS Unk           0 db (CSU)/0-133 feet (DSX-1)
T2XXP (PCI) Card 0 Span 1                OK      0      0      0      CCS HDB3 CRC4     0 db (CSU)/0-133 feet (DSX-1)
T2XXP (PCI) Card 0 Span 2                OK      0      0      0      CCS HDB3 CRC4     0 db (CSU)/0-133 feet (DSX-1)

The PRI spans also report OK on both sides.

net_side

pri show spans
PRI span 2/0: Up, Active

cpe_side

pri show spans
PRI span 1/0: Up, Active

I configured extensions.conf to place calls via the PRI on both sides:

[uscita]
exten => _X.,1,Dial(dahdi/g12/${EXTEN})
exten => _X.,n,Hangup

When I try to make a call, these error messages appear on both consoles.

On the pri_net side:

[May  1 18:57:50] NOTICE[2541]: chan_dahdi.c:2780 my_handle_dchan_exception: Got DAHDI event: HDLC Bad FCS (8) on D-channel of span 2

On the pri_cpe side:

[May  1 18:58:07] NOTICE[1489]: chan_dahdi.c:2777 my_handle_dchan_exception: Got DAHDI event: HDLC Abort (6) on D-channel of span 1
[May  1 18:58:07] NOTICE[1489]: chan_dahdi.c:2777 my_handle_dchan_exception: Got DAHDI event: HDLC Abort (6) on D-channel of span 1
[May  1 18:58:07] NOTICE[1489]: chan_dahdi.c:2777 my_handle_dchan_exception: Got DAHDI event: HDLC Abort (6) on D-channel of span 1

and again on the pri_net side:

== Primary D-Channel on span 2 down
[May  1 18:59:32] WARNING[2541]: sig_pri.c:1212 pri_find_dchan: Span 2: D-channel is down!
[May  1 18:59:33] NOTICE[2541]: chan_dahdi.c:2780 my_handle_dchan_exception: Got DAHDI event: HDLC Abort (6) on D-channel of span 2
[May  1 18:59:40] NOTICE[2541]: chan_dahdi.c:2780 my_handle_dchan_exception: Got DAHDI event: HDLC Bad FCS (8) on D-channel of span 2
[May  1 18:59:42] NOTICE[2541]: chan_dahdi.c:2780 my_handle_dchan_exception: Got DAHDI event: HDLC Bad FCS (8) on D-channel of span 2
[May  1 18:59:43] NOTICE[2541]: chan_dahdi.c:2780 my_handle_dchan_exception: Got DAHDI event: HDLC Abort (6) on D-channel of span 2
[May  1 18:59:43] NOTICE[2541]: chan_dahdi.c:2780 my_handle_dchan_exception: Got DAHDI event: HDLC Bad FCS (8) on D-channel of span 2
[May  1 18:59:45] NOTICE[2541]: chan_dahdi.c:2780 my_handle_dchan_exception: Got DAHDI event: HDLC Bad FCS (8) on D-channel of span 2

== Primary D-Channel on span 2 up
[May  1 18:59:58] NOTICE[2541]: chan_dahdi.c:2780 my_handle_dchan_exception: Got DAHDI event: HDLC Bad FCS (8) on D-channel of span 2
[May  1 18:59:58] NOTICE[2541]: chan_dahdi.c:2780 my_handle_dchan_exception: Got DAHDI event: HDLC Abort (6) on D-channel of span 2
[May  1 18:59:59] NOTICE[2541]: chan_dahdi.c:2780 my_handle_dchan_exception: Got DAHDI event: HDLC Bad FCS (8) on D-channel of span 2

    -- Registered SIP 'telefono3' at 192.168.0.2:59897
  == Using SIP RTP CoS mark 5
    -- Executing [511@local:1] Dial("SIP/telefono3-00000000", "dahdi/g12/511") in new stack
    -- Requested transfer capability: 0x00 - SPEECH
    -- Called dahdi/g12/511
[May  1 19:00:09] NOTICE[2541]: chan_dahdi.c:2780 my_handle_dchan_exception: Got DAHDI event: HDLC Abort (6) on D-channel of span 2

  == Primary D-Channel on span 2 down
[May  1 19:00:10] WARNING[2541]: sig_pri.c:1212 pri_find_dchan: Span 2: D-channel is down!
[May  1 19:00:11] NOTICE[2541]: chan_dahdi.c:2780 my_handle_dchan_exception: Got DAHDI event: HDLC Abort (6) on D-channel of span 2

  == Primary D-Channel on span 2 up
    -- Span 2: Channel 0/1 got hangup, cause 18
    -- Hungup 'DAHDI/i2/511-1'
  == Everyone is busy/congested at this time (1:0/0/1)
    -- Executing [511@local:2] Hangup("SIP/telefono3-00000000", "") in new stack
  == Spawn extension (local, 511, 2) exited non-zero on 'SIP/telefono3-00000000'
[May  1 19:00:18] NOTICE[2541]: chan_dahdi.c:2780 my_handle_dchan_exception: Got DAHDI event: HDLC Bad FCS (8) on D-channel of span 2
[May  1 19:00:21] NOTICE[2541]: chan_dahdi.c:2780 my_handle_dchan_exception: Got DAHDI event: HDLC Abort (6) on D-channel of span 2

I tried these solutions:

Solution number 1: the card is broken, so I bought another PRI card = FAIL
Solution number 2: change the PCI slot of the card on the pri_net side and/or the pri_cpe side = FAIL
Solution number 3: use a different DAHDI version = FAIL
Solution number 4: use pridiallocalplan=unknown and pridialplan=unknown = FAIL
Solution number 5: swap the pri_net / pri_cpe roles = FAIL
Solution number 6: try another crossover cable = FAIL (I tried 3 cables, two bought, one made by me, all tested with a network tester)
Solution number 7: reboot = FAIL
Solution number 8: remove the analog card from the pri_net server = FAIL

Does anyone have a suggestion? Thanks
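One detail that stands out when comparing the two configs above: the pri_net spans are defined with span=...,ccs,hdb3,crc4 but the pri_cpe spans use span=...,ccs,hdb3 without crc4, and a CRC4 framing mismatch between the two ends of an E1 is a classic cause of exactly this HDLC Bad FCS / HDLC Abort flood despite green alarms (the "dahdi show status" output also lists CRC4 under Options only on the NET side). A hedged sketch of the change on the CPE box:

    # /etc/dahdi/system.conf on pbx2-cpe: add crc4 to match the NET side
    span=1,1,0,ccs,hdb3,crc4
    span=2,2,0,ccs,hdb3,crc4

    # then reapply and re-check both ends
    dahdi_cfg -vv

(Or remove crc4 on the NET side instead; the point is that both ends must agree.)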

Software RAID Options Combining Internal SATA & External USB?

Posted: 21 May 2022 12:57 PM PDT

BACKGROUND

I currently use a hybrid server-workstation setup for my primary workstation. Basically, just a Fedora base running Linux KVM, on which I then containerize a couple Linux flavors & a Windows Install for use as my Workstations, along with the ability to quickly create containers for use in testing.

The CURRENT Physical Storage layout is as follows:

  • Intel 390A Chipset
  • iRST as a "hardware" RAID option (Offering RAID-0, 1, 10, 5)
    • RST-Controlled, Single 6-port SATA Hub
      • 2X 120GB SATA6 SSD's in RAID0
      • 2X 500GB SATA6 + 1X 1TB SATA6 HDD's in RAID5 (strange happenings with this)
      • 1X 2TB SATA6 HDD Non-RAID
  • 1TB NVMe PCIe@4x SSD (not-RST Controlled, since, for like 5 years, there's been an unfixed bug involving both the Linux Kernel and Anaconda that prevents RST controlled NVMe/PCIe drives from being enumerated)
  • USB 3.0 Hub
    • 1X 3TB 5400RPM + 1X 1TB 5400RPM External USB 3.0 HDD's
  • USB 3.1g2 Hub
    • 3X 2TB 5400RPM External USB 3.0 HDD's
      All USB drives are currently in single disk config

I have a lot of free storage space. Recently, a BIOS update caused the RAID-5 array to "drop" one of the disks (the 1TB one) and show "degraded" - yet there was no data loss, nor any need to rebuild the array. It still showed as an "RST RAID-5 Array", with what was once the third disk showing up as a single disk with a single "RAW" partition. That makes me question how iRST really handles a 3-disk RAID-5 array: if it were actually RAID-5 from the beginning, the drive would have needed to be replaced and the array rebuilt before any data could be accessed. So I suspect it was actually treating it like a RAID-4, with a dedicated parity disk rather than striped parity - or possibly something non-standard, like a RAID-0 across the two 500GB drives mirrored (RAID-1) onto the 1TB drive. That would actually be the most optimal config, as it would allow full use of the space with maximal (at least read) performance while still tolerating a single drive failure like RAID-5, although I don't think iRST is "smart" enough to do such a thing... it remains a mystery. (I think Btrfs, and maybe Linux LVM with ZFS, can do stuff like that, or one could do it with a combination of hardware and software RAID - but again, off the subject.)

The NVMe drive holds all the main OS files, so I decided to copy everything from the internal HDDs to the external HDDs and completely reconfigure the storage. Since iRST is really just a step above software RAID (unless used with those special Intel drives, and even then, I don't know), I figured using software RAID in Linux was my best option, as it offers a lot more flexibility. I considered Btrfs, and it honestly is probably a better option than LVM, but I have more familiarity with the latter, so anyway...

MAIN QUESTION

So, with all that said, I was thinking about possibly trying to incorporate my external USB drives into the RAID setup. The WD My Passport drives are actually pretty good for 5400RPM spinning drives, and the bottleneck on single read or write operations is honestly usually the USB PHY. Combining the external USB drives into RAID arrays of their own would likely not offer much of a speed improvement, but rather just bottleneck the USB controllers. But that got me thinking... What if I used the external USB drives in conjunction with the internal drives, in a nonstandard configuration like RAID-3 or RAID-4 - that is, using the internal drives as the data drives, while using the slower external USB drives as the parity drives?

My thinking is that, A) Parity operations are really only relevant on write operations, so reads would not involve the external drives. B) Parity blocks (or bytes) are obviously also smaller than the data itself, so perhaps the write performance would even out as well.

Another option might be to do a RAID-01, with striping across the internal drives and mirroring onto the external drives (although in that case it would probably make more sense just to do frequent scheduled backups to the external storage).

Does anyone have any experience doing such a nonstandard configuration? I am particularly interested in the RAID-3/4 idea, more as a proof of concept and test of performance than for pure practicality, although I think there could be some practical use if it proved to work. Thanks!
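For the "fast internal data disks, slow external redundancy" idea, Linux md has a first-class mechanism in RAID-1: marking the slow member --write-mostly (optionally with --write-behind) keeps reads on the internal disk while the USB disk only absorbs writes, which is arguably a cleaner fit than RAID-3/4. A sketch with hypothetical device names:

    # internal partition as preferred member, USB partition as write-mostly mirror
    mdadm --create /dev/md0 --level=1 --raid-devices=2 \
        /dev/sda1 --write-mostly /dev/sdb1

    # optionally let writes to the slow member lag behind (needs a bitmap)
    mdadm --grow /dev/md0 --bitmap=internal
    mdadm --grow /dev/md0 --write-behind=256

    cat /proc/mdstat    # the USB member shows a (W) flag

md does still provide a raid4 personality (mdadm --create --level=4) if you want the dedicated-parity experiment, but note that the parity disk is written on every stripe, so sustained writes are throttled to the USB disk's speed - the bottleneck lands exactly on the hot path.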

DNS Server Search order in Windows 10 and VPNs

Posted: 21 May 2022 04:50 PM PDT

DNS queries to hosts accessible through a VPN fail. How to fix?

On a Windows 10 host, DNS queries for hosts known only to the VPN-accessible DNS Server fail to resolve. The DNS queries are sent to my local DNS Server (192.168.1.1) which returns no DNS Answers. DNS queries are not sent to the VPN-accessible DNS Server (10.0.1.1).

Powershell Get-DnsClientServerAddress shows:

PS> Get-DnsClientServerAddress

InterfaceAlias               Interface Address ServerAddresses
                             Index     Family
--------------               --------- ------- ---------------
Ethernet                            10 IPv4    {192.168.1.1}
Ethernet                            10 IPv6    {}
VPN                                 20 IPv4    {10.0.1.1}
VPN                                 20 IPv6    {}

Pinging the VPN DNS Server succeeds (ping 10.0.1.1).
Resolving a DNS name of google.com succeeds (Resolve-DnsName google.com).
However, resolving a DNS name of VPN-accessible host server.corp.com fails.

PS> Resolve-DNSName server.corp.com
PS>

How can I force DNS queries to prefer the VPN-accessible DNS Server at 10.0.1.1?
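Windows 10 chooses DNS servers per interface (the interface metric decides the winner); it doesn't route individual queries by destination. The usual fix is a Name Resolution Policy Table (NRPT) rule pinning the corporate suffix to the VPN's DNS server. A sketch, run from an elevated PowerShell (the suffix is assumed from your example):

    # send all *.corp.com queries to the VPN DNS server
    Add-DnsClientNrptRule -Namespace ".corp.com" -NameServers "10.0.1.1"

    Get-DnsClientNrptRule             # verify the rule
    Clear-DnsClientCache
    Resolve-DnsName server.corp.com   # should now be answered by 10.0.1.1

The blunter alternative is lowering the VPN adapter's interface metric (Set-NetIPInterface -InterfaceAlias "VPN" -InterfaceMetric 1), at the cost of sending all DNS queries through the tunnel.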

Send As not Working sending from Outlook Client

Posted: 21 May 2022 03:03 PM PDT

Hey, I set up a DL in Office 365 and added members. I can send emails on behalf of the distribution group and it sends from Office 365, but when I go to my Outlook client I get an error that I cannot send on behalf of the user. This is the only feature not working.

Does anyone have any ideas? I am in the group allowed to send on behalf, as are other users. Again, we can send with no error in O365; we cannot in the Microsoft 365 client app on the desktop.

All messages that are sent normally (not on behalf of) are received.

0x80070005-0x000004dc-0x00000524 error code

How to get the security-group for an ECS cluster

Posted: 21 May 2022 04:07 PM PDT

When creating an EC2-mode ECS cluster, you must assign/create a security group:

Create Screen

However, there appears to be no way to retrieve the ARN/name of the security group afterward.

UI:

Screenshot

CLI:

$ aws ecs describe-clusters --clusters extraction
{
    "clusters": [
        {
            "clusterArn": "arn:aws:ecs:us-east-1:326764833890:cluster/extraction",
            "clusterName": "extraction",
            "status": "ACTIVE",
            "registeredContainerInstancesCount": 0,
            "runningTasksCount": 0,
            "pendingTasksCount": 0,
            "activeServicesCount": 0,
            "statistics": [],
            "tags": [],
            "settings": [
                {
                    "name": "containerInsights",
                    "value": "disabled"
                }
            ],
            "capacityProviders": [
                "FARGATE_SPOT",
                "FARGATE"
            ],
            "defaultCapacityProviderStrategy": []
        }
    ],
    "failures": []
}

Assume there are no services that can be inspected:

No Services

As this is cluster-level semantics, I would assume there is a cluster-level way of inspecting this information. That said, I'm close to concluding that this information is not actually [exposed] in ECS and can only be found by looking at the actual instances in EC2.

Note that it's interesting that the "attributes" under the "ECS Instances" tab show a machine class but not the security group:

ECS Instances
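Your suspicion matches the ECS data model: the security group belongs to the EC2 container instances (or to task ENIs in awsvpc mode), not to the cluster object, so the lookup has to hop from the cluster to the instances. A sketch with the AWS CLI (requires registered container instances):

    # container instance ARNs -> EC2 instance IDs
    arns=$(aws ecs list-container-instances --cluster extraction \
        --query 'containerInstanceArns[]' --output text)
    ids=$(aws ecs describe-container-instances --cluster extraction \
        --container-instances $arns \
        --query 'containerInstances[].ec2InstanceId' --output text)

    # EC2 instance IDs -> security groups
    aws ec2 describe-instances --instance-ids $ids \
        --query 'Reservations[].Instances[].SecurityGroups[]'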

How to install Mysql 5.6 on RHEL8 from Mysql repos?

Posted: 21 May 2022 12:03 PM PDT

I'm new to dnf, so this is probably obvious to someone.

I did check versionlock and checked for any exclude lines in any of the /etc/yum.repos.d files. But, when I have any other repos enabled, I cannot find mysql server to install. I can try to install it with all other repos disabled, but then it cannot find all the dependencies.

[root@ip-10-9-10-242 yum.repos.d]# dnf clean all
42 files removed
[root@ip-10-9-10-242 yum.repos.d]# dnf --disablerepo "*" --enablerepo "mysql56-community" list available
Last metadata expiration check: 0:07:40 ago on Tue 30 Jul 2019 12:38:45 UTC.
Available Packages
mysql-community-bench.x86_64          5.6.45-2.el7          mysql56-community
...
mysql-community-server.x86_64         5.6.45-2.el7          mysql56-community
mysql-community-test.x86_64           5.6.45-2.el7          mysql56-community
[root@ip-10-9-10-242 yum.repos.d]# dnf --enablerepo "mysql56-community" search mysql-community-server.x86_64
Last metadata expiration check: 0:07:59 ago on Tue 30 Jul 2019 12:38:48 UTC.
No matches found.
[root@ip-10-9-10-242 yum.repos.d]# dnf search mysql-community
Last metadata expiration check: 0:10:57 ago on Tue 30 Jul 2019 12:38:48 UTC.
========================= Name Matched: mysql-community =========================
mysql-community-bench.x86_64 : MySQL benchmark suite
mysql-community-release.noarch : MySQL repository configuration for yum
mysql-community-release.noarch : MySQL repository configuration for yum
mysql-community-embedded.i686 : MySQL embedded library
mysql-community-embedded.x86_64 : MySQL embedded library
mysql-community-embedded-devel.i686 : Development header files and libraries for MySQL as an embeddable library
mysql-community-embedded-devel.x86_64 : Development header files and libraries for MySQL as an embeddable library

So, you can see that some of the stuff in the mysql56 repo is found (like bench) but the server isn't. I'd also previously done:

yum-config-manager --disable mysql80-community
yum-config-manager --enable mysql56-community

And can see:

[root@ip-10-9-10-242 yum.repos.d]# dnf repolist
Last metadata expiration check: 0:00:04 ago on Tue 30 Jul 2019 13:03:47 UTC.
repo id                         repo name                                status
mysql-connectors-community      MySQL Connectors Community                 118
mysql-tools-community           MySQL Tools Community                       95
mysql56-community               MySQL 5.6 Community Server                 169
rhui-client-config-server-8     Red Hat Update Infrastructure 3 Client C     3
rhui-rhel-8-appstream-rhui-rpms Red Hat Enterprise Linux 8 for x86_64 -  5,472
rhui-rhel-8-baseos-rhui-rpms    Red Hat Enterprise Linux 8 for x86_64 -  2,029

For anyone still confused, what can't be found, can't be installed.

[root@ip-10-9-10-242 ~]# dnf install mysql-community-server
Last metadata expiration check: 0:00:12 ago on Thu 01 Aug 2019 09:14:02 UTC.
No match for argument: mysql-community-server
Error: Unable to find a match

OK, so a bit of "-v" action and it says the packages are excluded. BUT I can disable all excludes and it still says they're excluded.

[root@ip-10-9-10-242 dnf]# dnf repoquery --repo mysql56-community --available mysql-community-server-0:5.6.45-2.el7.x86_64
Last metadata expiration check: 0:13:17 ago on Thu 01 Aug 2019 17:12:25 UTC.
mysql-community-server-0:5.6.45-2.el7.x86_64

[root@ip-10-9-10-242 dnf]# dnf --enablerepo=mysql56-community --disableexcludepkgs all --disableexcludes all install mysql-community-bench
Last metadata expiration check: 0:14:46 ago on Thu 01 Aug 2019 17:13:30 UTC.
Error:
  Problem: package mysql-community-bench-5.6.45-2.el7.x86_64 requires mysql-community-server(x86-64) >= 5.6.10, but none of the providers can be installed
    - cannot install the best candidate for the job
    - package mysql-community-server-5.6.15-4.el7.x86_64 is excluded
    - package mysql-community-server-5.6.16-1.el7.x86_64 is excluded
.....
    - package mysql-community-server-5.6.44-2.el7.x86_64 is excluded
    - package mysql-community-server-5.6.45-2.el7.x86_64 is excluded

[root@ip-10-9-10-242 dnf]# dnf --disablerepo "*" --enablerepo=mysql56-community --disableexcludepkgs all --disableexcludes all install mysql-community-bench
Last metadata expiration check: 0:18:12 ago on Thu 01 Aug 2019 17:12:25 UTC.
Error:
  Problem: cannot install the best candidate for the job
    - nothing provides /usr/bin/perl needed by mysql-community-bench-5.6.45-2.el7.x86_64
    - nothing provides perl(Getopt::Long) needed by mysql-community-bench-5.6.45-2.el7.x86_64
    - nothing provides perl(Data::Dumper) needed by mysql-community-bench-5.6.45-2.el7.x86_64
    - nothing provides perl(POSIX) needed by mysql-community-bench-5.6.45-2.el7.x86_64
    - nothing provides perl(Cwd) needed by mysql-community-bench-5.6.45-2.el7.x86_64
    - nothing provides perl(DBI) needed by mysql-community-bench-5.6.45-2.el7.x86_64
    - nothing provides perl(Benchmark) needed by mysql-community-bench-5.6.45-2.el7.x86_64
    - nothing provides perl(sigtrap) needed by mysql-community-bench-5.6.45-2.el7.x86_64
(try to add '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)
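On RHEL 8, "is excluded" with no exclude lines anywhere usually means modularity filtering: the default mysql module stream masks non-modular mysql-community-server packages from third-party repos, and --disableexcludes does not lift module filtering. A hedged sketch of the check and workaround:

    dnf module list mysql         # see which stream is enabled or default
    dnf module disable -y mysql   # stop the module from masking repo packages
    dnf --enablerepo=mysql56-community install mysql-community-server

(The perl dependency errors in the last run are expected: with --disablerepo "*" nothing can provide perl, so keep BaseOS/AppStream enabled.)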

Multipart ranges in Nginx reverse proxy

Posted: 21 May 2022 02:01 PM PDT

I am trying to set up Nginx as a reverse proxy. The upstream server is serving some media files.

Because of the large number of requests for these files, and since these files are not going to change for at least a couple of weeks, I'm caching the upstream responses in Nginx and serving all subsequent requests from the cache.

proxy_cache_path /home/bandc/content levels=1:2 keys_zone=content_cache:10m max_size=10g inactive=15d use_temp_path=off;

upstream mycdn {
    server myserver.dev;
}

location / {
    proxy_cache content_cache;
    proxy_pass http://mycdn;
    proxy_cache_methods GET HEAD;
    proxy_cache_valid 200 302 7d;
    proxy_cache_valid 404    10m;
    add_header x-cache $upstream_cache_status;
    ....
}

The clients can send Range requests for large media files. The upstream server does support range requests.

The problem is that if I send a request with multiple byte ranges before sending any GET or single-Range request (i.e. before the response has been cached), Nginx delivers the whole file with 200 OK instead of the requested ranges with 206 Partial Content. But once the content has been cached, all multipart range requests work flawlessly.

I looked around a bit and found this blog post:

How Does NGINX Handle Byte Range Requests? If the file is up‑to‑date in the cache, then NGINX honors a byte range request and serves only the specified bytes of the item to the client. If the file is not cached, or if it's stale, NGINX downloads the entire file from the origin server. If the request is for a single byte range, NGINX sends that range to the client as soon as it is encountered in the download stream. If the request specifies multiple byte ranges within the same file, NGINX delivers the entire file to the client when the download completes.

Is there any way to ensure that if the file is not cached yet, multipart range should be served from upstream alone (without caching) and eventually from local cache after Nginx caches it when GET or a Range request with a single byte range is performed?
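One approach that is sometimes used is to bypass the cache only for multi-range requests (a Range header containing a comma), proxying them straight through with the Range header intact, while plain GETs and single ranges still populate and use the cache. A hedged sketch (the maps go at http{} level):

    # detect multi-range requests
    map $http_range $multirange {
        default 0;
        "~,"    1;
    }
    # forward the client's Range upstream only for those requests
    map $http_range $upstream_range {
        default "";
        "~,"    $http_range;
    }

    location / {
        proxy_cache        content_cache;
        proxy_cache_bypass $multirange;   # don't answer these from cache
        proxy_no_cache     $multirange;   # ...and don't store the 206 either
        proxy_set_header   Range $upstream_range;
        proxy_pass         http://mycdn;
        ...
    }

Until something triggers a full-file fetch, every multi-range request costs an upstream round trip, so the cache still wants to be primed by a normal GET at some point.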

Add multiple header field matches to Exchange rule

Posted: 21 May 2022 05:07 PM PDT

I can't seem to find it, so perhaps there isn't a way, but does anyone know how (or whether) you can add multiple header-field matches to an Exchange Online rule?

I'm talking about when you create a rule and select "A message header matches these text patterns". Is there any way to add that predicate multiple times? What if I wanted to match an email based on two or three different header fields? Once you've selected the option you cannot add it again, and it only seems to support one value for the header field name.

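For completeness, the underlying cmdlet has the same restriction: New-TransportRule accepts the HeaderMatchesMessageHeader/HeaderMatchesPatterns pair only once per rule. A workaround sometimes used is chaining rules: one rule stamps a marker header when the first header matches, and a second rule combines the marker (via the separate HeaderContains predicate) with the second header. A hedged sketch in Exchange Online PowerShell - all header names and values here are invented examples:

    # rule 1 (higher priority): stamp a marker when header A matches
    New-TransportRule -Name "Stamp when header A matches" -Priority 0 `
        -HeaderMatchesMessageHeader "X-Source-System" -HeaderMatchesPatterns "crm.*" `
        -SetHeaderName "X-HeaderA-Hit" -SetHeaderValue "yes"

    # rule 2: require the marker AND a pattern on a second header
    New-TransportRule -Name "Act when both headers match" -Priority 1 `
        -HeaderContainsMessageHeader "X-HeaderA-Hit" -HeaderContainsWords "yes" `
        -HeaderMatchesMessageHeader "X-Environment" -HeaderMatchesPatterns "prod.*" `
        -PrependSubject "[both matched] "

Since one rule can carry one "matches patterns" predicate plus one "contains words" predicate, two headers fit in a single rule that way; three or more need the stamping trick.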

Proxmox 5.2 Gemini Lake and IGD (graphics) pass through for Ubuntu 18

Posted: 21 May 2022 02:01 PM PDT

I'm trying to set up a fresh install of Proxmox 5.2 on Gemini Lake, and I would like to configure a VM with IGD (graphics) passthrough for Ubuntu 18.

The computer is based on an ASRock J4105-ITX (asrock.com/mb/Intel/J4105-ITX/).

A standard install works properly, and now I would like to use the HDMI output for a VM running Ubuntu 18.

I have read all this information:

My setup is like this:

  1. Fresh install Proxmox 5.2
  2. Grub:

    vim /etc/default/grub  

    Change the

    GRUB_CMDLINE_LINUX_DEFAULT  

    line to

    GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on video=efifb=off,vesafb=off"`  

    Save and quit

    update-grub  
  3. Blacklist module:

    vim /etc/modprobe.d/pve-blacklist.conf  

    Add these lines:

    blacklist snd_hda_intel
    blacklist snd_hda_codec_hdmi
    blacklist i915

    Save and quit

  4. VFIO:

    vim /etc/modules  

    Add these lines:

    vfio
    vfio_iommu_type1
    vfio_pci
    vfio_virqfd

    Save and quit
  5. Vga adapter:

    lspci -n -s 00:02

    (lspci displays: 00:02.0 0300: 8086:3185 (rev 03))

    vim /etc/modprobe.d/vfio.conf

    Add this line:

    options vfio-pci ids=8086:3185  

    Save and quit

  6. update-initramfs -u

  7. VM:

    Create a VM (id = 100) with an Ubuntu 18 iso as the primary boot

    Change the setup for the VM:

    vim /etc/pve/qemu-server/100.conf  

    Add these lines:

    machine: pc-i440fx-2.2
    args: -device vfio-pci,host=00:02.0,addr=0x02
    vga: none

    Save and quit

  8. Reboot the server

  9. Start VM 100

Video output is initialised (the screen clears) just after VM 100 is started, but the screen then remains black. The start task log is:

no efidisk configured! Using temporary efivars disk.
kvm: -device vfio-pci,host=00:02.0,addr=0x02,x-igd-gms=1,x-igd-opregion=on: IGD device 0000:00:02.0 has no ROM, legacy mode disabled
TASK OK

I tried installing Ubuntu before making these changes, but it doesn't help.

What should I do now?
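The start log's "IGD device 0000:00:02.0 has no ROM, legacy mode disabled" line is the concrete blocker: legacy-mode IGD passthrough needs a VBIOS ROM image for the guest to bring up the display. A workaround often described is to supply a ROM image for the iGPU via the vfio-pci romfile property - a hedged sketch, where the ROM path/filename is an assumption and you would first need a valid dump for the J4105's UHD 600:

    # /etc/pve/qemu-server/100.conf (romfile path is hypothetical)
    args: -device vfio-pci,host=00:02.0,addr=0x02,romfile=/usr/share/kvm/igd-j4105.rom

The other avenue sometimes reported to work better on newer Atom-class iGPUs is a UEFI guest (OVMF) instead of the legacy i440fx/SeaBIOS path, since it does not rely on legacy mode.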

Varnish maximum "set beresp.ttl" value

Posted: 21 May 2022 07:01 PM PDT

I'm trying to set a 100-day (= 144000 minutes) TTL in the Varnish cache for a certain page, but I've observed that Varnish is purging the cached pages earlier than that.

So the question is: what is the maximum TTL that I can set?

if ((req.url ~ "\.(html)$") && (beresp.ttl > 0s)) {
    unset beresp.http.cache-control;
    unset beresp.http.expires;
    unset beresp.http.cookie;
    set beresp.do_gzip = true;
    set beresp.http.Cache-Control = "public, max-age=14411, s-maxage=14411";
    remove beresp.http.Pragma;
    set beresp.ttl = 144000m;
    return(deliver);
}
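For what it's worth, there is no hard TTL ceiling anywhere near 100 days (VCL durations even have a d unit, so set beresp.ttl = 100d; is equivalent); when objects disappear early, the usual cause is LRU eviction because the storage is full, not TTL expiry. A quick check, assuming varnishstat is available:

    # a growing n_lru_nuked means objects are being evicted to make room
    varnishstat -1 -f MAIN.n_lru_nuked -f 'SMA.*.g_bytes' -f 'SMA.*.g_space'

If n_lru_nuked keeps climbing, the fix is a larger -s malloc/-s file allocation (or caching fewer objects) rather than a different TTL.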

zimbra export messages in tar.gz by ID

Posted: 21 May 2022 06:04 PM PDT

I need to delete old messages from a Zimbra account.

With this command:

zmmailbox -z -m mail@domain.com s -t message -l 999 "before:1/1/14" |awk '{ if (NR!=1) {print}}'| grep mess | awk '{ print $2 "," }' | tr -d '\n'  

I can retrieve the message IDs, and I can delete a message by ID:

zmmailbox -z -m mail@domain.com deleteMessage $ID  

But between these two commands, I would like to save the messages in a .tar.gz archive.
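Zimbra's REST interface can export everything matching a search query as a tgz in one step, which slots neatly between the search and the delete. A hedged sketch using zmmailbox's getRestURL with the same query:

    # export all matching messages as a tgz before deleting them
    zmmailbox -z -m mail@domain.com getRestURL '//?fmt=tgz&query=before:1/1/14' \
        > /backup/mail-before-2014.tgz

    # then delete, feeding in the IDs gathered by the search
    zmmailbox -z -m mail@domain.com deleteMessage $IDS

(The /backup path and $IDS variable are placeholders; verify the archive opens before running the delete.)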

Some Exchange server users can't send email from mobile devices

Posted: 21 May 2022 07:01 PM PDT

I am facing a strange problem. I have configured some users to use their mailboxes from their mobile devices and Outlook over POP3 and SMTP. The problem is that they can receive emails but they are not able to send emails. Whenever they send an email it goes to the Outbox with the status "Failed". This problem occurs with only some users; others are able to send and receive perfectly without any error.

We have domain, Windows Server 2008 and Exchange Server 2010.

I have tried changing the ports and security layers, but with no success.

Please help guys...

rsyslogd template stopped working

Posted: 21 May 2022 03:03 PM PDT

Really perplexed at what happened. I've had rsyslogd running on a CentOS 6.5 server for a while now, logging remote hosts to a special folder, /data/rsyslog. Yesterday I set up our firewall to start logging, and it was working fine except that the logs for that firewall were large, so I set up a logrotate job to rotate them. This morning all the logs rotated, but rsyslog was no longer logging to any files in my rsyslog folder for any hosts. At first I thought something was wrong with creating a new file, but what I found is that all the remote logs now go to the standard /var/log/messages file. This is my rsyslog.conf file:

[root@backup1 etc]# cat rsyslog.conf
# rsyslog v5 configuration file

# For more information see /usr/share/doc/rsyslog-*/rsyslog_conf.html
# If you experience problems, see http://www.rsyslog.com/doc/troubleshoot.html

#### MODULES ####

$ModLoad imuxsock # provides support for local system logging (e.g. via logger command)
$ModLoad imklog   # provides kernel logging support (previously done by rklogd)
#$ModLoad immark  # provides --MARK-- message capability

# Provides UDP syslog reception
$ModLoad imudp
$UDPServerRun 514

# Provides TCP syslog reception
$ModLoad imtcp
$InputTCPServerRun 514

#### GLOBAL DIRECTIVES ####

# Use default timestamp format
$ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat

# File syncing capability is disabled by default. This feature is usually not required,
# not useful and an extreme performance hit
#$ActionFileEnableSync on

# Include all config files in /etc/rsyslog.d/
$IncludeConfig /etc/rsyslog.d/*.conf

#### RULES ####

# Log all kernel messages to the console.
# Logging much else clutters up the screen.
#kern.*                                                 /dev/console

# Log anything (except mail) of level info or higher.
# Don't log private authentication messages!
*.info;mail.none;authpriv.none;cron.none        /var/log/messages

# The authpriv file has restricted access.
authpriv.*      /var/log/secure

# Log all the mail messages in one place.
mail.*                                                  -/var/log/maillog

# Log cron stuff
cron.*                                                  /var/log/cron

# Everybody gets emergency messages
*.emerg                                                 :omusrmsg:*

# Save news errors of level crit and higher in a special file.
uucp,news.crit  /var/log/spooler

# Save boot messages also to boot.log
local7.*                                                /var/log/boot.log

$template Secure_log,"/data/rsyslog/%fromhost%.secure"
$template Message_log,"/data/rsyslog/%fromhost%.message"

That is the way the file was, but I realized that I don't have an action line for the templates, so I added the following to the end to test:

*.* ?Message_log  

Now everything goes into per-hostname files in that folder as expected, but it's also still logging to the /var/log folder; I guess the line above does that together with the /var/log/messages rule. I have upgraded to rsyslogd version 7 in the process of trying to get this working. I'm not sure what happened, but I'm just trying to get it working from scratch now and can't seem to track down the proper configuration to log remote hosts only to that special folder. Can someone help?
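A minimal sketch of one common pattern, using legacy rsyslog property-filter syntax: route messages from non-local hosts to the template file, then discard them so they never reach the catch-all /var/log/messages rule. The filter must appear before the standard rules; the local address here is an assumption about how your remote hosts are distinguished:

$template Message_log,"/data/rsyslog/%fromhost%.message"

# Write anything not from this host to its per-host file...
:fromhost-ip, !isequal, "127.0.0.1" ?Message_log
# ...then discard it ("& stop" is the preferred spelling on rsyslog 7+)
& ~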

Cpu overuse replicating a Gluster Volume

Posted: 21 May 2022 01:05 PM PDT

I have this scenario:

srv01 srv02 srv03

There is a Gluster volume "vol1" running on srv03, and all the servers can use it for I/O. vol1 contains a lot of mixed-size images, ranging from a few KB to 3-4 MB; the total amount is about 1.5 TB.

Gluster version is 3.6.2

It's not a silver bullet and needs some tuning, but it works pretty well.

Now I have to replicate srv03's brick to the other servers.

The problem is that srv03's CPU skyrockets to 100% and it cannot serve normal requests. Network traffic is low.

Options are:

cluster.data-self-heal-algorithm: full

cluster.self-heal-daemon: off

performance.cache-size: 1gb

I have to keep the service running while the replication is in progress. Your suggestions are welcome.
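For reference, a sketch of how such options are adjusted with the Gluster CLI — the option names exist in GlusterFS 3.x, but the values shown are illustrative rather than a recommendation for this workload:

# Checksum-based healing instead of copying whole files (trades network for CPU differently)
gluster volume set vol1 cluster.data-self-heal-algorithm diff

# Limit how many files are healed in the background at once
gluster volume set vol1 cluster.background-self-heal-count 4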

Is it possible to ignore a missing PAM module?

Posted: 21 May 2022 01:05 PM PDT

I am configuring yubico-pam to enable passwordless sudo access using challenge-response from a YubiKey. The following works:

# /etc/pam.d/sudo
auth       sufficient     pam_yubico.so mode=challenge-response
auth       required       pam_opendirectory.so
account    required       pam_permit.so
password   required       pam_deny.so
session    required       pam_permit.so

unless the pam_yubico.so module is missing, uninstalled, or corrupted, in which case one is told:

$ sudo su -
sudo: unable to initialize PAM: No such file or directory

Is it possible to tell PAM to ignore a module that is missing, rather than failing immediately and preventing PAM from evaluating the rest of the stack?
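For comparison, Linux-PAM's bracketed control syntax can downgrade a module's return codes, as in the sketch below — but as far as I know it governs results after the module loads, and may not prevent the initialization error above (treat this as an assumption to verify, particularly on macOS/OpenPAM, which pam_opendirectory suggests you are on):

# Linux-PAM style: ignore this module's failure codes if it runs at all
auth  [success=ok default=ignore]  pam_yubico.so mode=challenge-response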

Synology NAS - rsync messing up versioning / deduplication

Posted: 21 May 2022 12:03 PM PDT

Is it true that Synology DSM 4.3's default rsync implementation cannot handle "vast" amounts of data and could mess up versioning / deduplication? Could any of the variables (see detailed info below) make this much more difficult?

Edit: I'm looking for nothing more than an answer as to whether the above claims are nonsense or could be true.

Detailed info:

At work, we've got a Synology NAS running at the office. This NAS is used by a few designers who work directly from it. They have running projects consisting of high-resolution stock photos, large PSDs, PDFs, and whatnot. We have a folder, approx. 430 GB in size, that contains only the currently running projects. This folder is supposed to be backed up to a datacenter weekly over our internet connection.

All of our IT is handled by a third party, which claims that our backup has grown to a size ("100GB+") at which the default rsync implementation in DSM (4.3) is unable to handle the vast amount of data going to the online backup (on one of their machines in their datacenter). They say the backup amounts to about 10 TB of data because rsync has problems with "versioning / de-duplication" (retention: 30 days) and goes haywire.

Because of this, they suggest using a "professional online backup service", which significantly cranks up our cost per GB for the online backup.
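For context on the "versioning / de-duplication" claim: rsync-based backup schemes typically deduplicate across versions with hard links via --link-dest, roughly as in this sketch. The paths are illustrative, and whether DSM 4.3 does exactly this internally is an assumption:

# Each weekly run hard-links unchanged files against the previous snapshot,
# so 30 retained versions of 430 GB need far less than 30 x 430 GB of storage.
rsync -a --delete \
  --link-dest=/backups/2014-01-01 \
  /volume1/projects/ /backups/2014-01-08/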

Dante blocking some localhost connections

Posted: 21 May 2022 06:04 PM PDT

I connect through a tunnel to Dante. It works, but it still blocks two of the apps that I need to work through the SOCKS proxy:

Aug 28 14:20:24 (1377699624) danted[3519]: block(1): tcp/connect [: 127.0.0.1.51519 -> 127.0.0.1.30000
Aug 28 14:20:33 (1377699633) danted[3519]: block(1): tcp/connect [: 127.0.0.1.51527 -> 127.0.0.1.6112

That is what the logs say. My config is:

#Where are we going to log all those useful error messages?
logoutput: /var/log/dante.log

#What ip and port should Dante listen on,
# since I am only going to be using this via SSH
#I only want to allow connections over the loopback
internal: 127.0.0.1 port = 1080

#Bind to the eth0 interface
external: eth0

#Since I am only accepting connections over the loopback,
# the only people that COULD connect
# would already be authenticated,
# no need to have dante authenticate also
method: username none

#Which unprivileged user will Dante impersonate if need-be?
user.notprivileged: nobody

# Who can access this proxy?
# Accept only connections from the loopback, all ports
client pass {
  from: 127.0.0.0/8 port 1-65535 to: 0.0.0.0/0
}

#Block all other connection attempts
client block {
  from: 0.0.0.0/0 to: 0.0.0.0/0
  log: connect error
}

# Once connected, where can they go?
block {
  from: 0.0.0.0/0 to: 127.0.0.0/8
  log: connect error
}

#Pass from the internal IP to anywhere
pass {
  from: 192.168.0.0/16 to: 0.0.0.0/0
  protocol: tcp udp
}

#Pass from the loopback going anywhere
pass {
  from: 127.0.0.0/8 to: 0.0.0.0/0
  protocol: tcp udp
}

# Block everything else
block {
  from: 0.0.0.0/0 to: 0.0.0.0/0
  log: connect error
}

Do you know what is happening here? I'm pretty confused.
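One thing worth checking, on the assumption that Dante evaluates rules first-match in file order: both blocked destinations (127.0.0.1:30000 and 127.0.0.1:6112) match the "to: 127.0.0.0/8" block rule before the loopback pass rule is ever reached. A sketch of a pass rule that would need to sit above that block, if loopback destinations on one of those ports should be allowed (repeat per port):

# Must come before the "to: 127.0.0.0/8" block rule (first match wins)
pass {
  from: 127.0.0.0/8 to: 127.0.0.0/8 port = 30000
  protocol: tcp
}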

creating site collections using SharePoint management shell 2013

Posted: 21 May 2022 05:07 PM PDT

I am creating site collections for testing SharePoint. I am using host-named site collections (HNSC) instead of path-based site collection addressing.

For that

  1. I got a reference to the HNSC web application that I had created earlier using web application management.

  2. I created a non-templated site at the root of the web application for workflows to connect to:

    New-SPSite -Name "Root HNSC Site Collection" -Url "http://vspserver -HostHeaderWebApplication $hnscWebApp -OwnerAlias "vspserver \Administrator"  
  3. But when I try to create a team site using the following script:

    New-SPSite -Name "VSPServer Intranet" -Url "http://intranet.vspserver" –HostHeaderWebApplication $hnscWebApp -Template "STS#0" -OwnerAlias "vspserver\Administrator"  

I get this error message:

New-SPSite : Cannot find an SPWebApplication object that contains the
following Name, Id, or Url: http://intranet.vspserver.test-lab.local.
At line:1 char:1
+ New-SPSite -Name "SPServer Intranet" -Url "http://intranet.vspserver.test-lab.lo ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : InvalidData: (Microsoft.Share...SPCmdletNewSite:
   SPCmdletNewSite) [New-SPSite], SPCmdletPipeBindException
    + FullyQualifiedErrorId : Microsoft.SharePoint.PowerShell.SPCmdletNewSite

Any ideas would be appreciated.
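For what it's worth, a sketch of re-resolving the web application reference immediately before the call, to confirm $hnscWebApp is still a valid SPWebApplication object — the root URL here is an assumption based on the first step:

# Confirm $hnscWebApp really resolves to the HNSC web application (URL assumed)
$hnscWebApp = Get-SPWebApplication "http://vspserver"
New-SPSite -Name "VSPServer Intranet" -Url "http://intranet.vspserver" `
    -HostHeaderWebApplication $hnscWebApp -Template "STS#0" `
    -OwnerAlias "vspserver\Administrator"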

Problems with starting Uniform Server

Posted: 21 May 2022 04:07 PM PDT

Everyone, I am having a serious problem accessing the Uniform Server that I installed some time ago to build a web database. The last time I tried to start the server, I was successful. I just tried to start it again a little while ago, and the browser said the link appears to be broken. Can somebody help me fix this, please? Any help would be much appreciated.
