Sunday, April 4, 2021

Recent Questions - Server Fault

KVM vps scsi disk

Posted: 04 Apr 2021 09:44 PM PDT

The CentOS 7 ISO installer is unable to detect the disk when using the SCSI disk driver on a KVM host running Ubuntu 18.04 or later.

    <disk type='block' device='disk'>
      <driver name='qemu' cache='none'/>
      <source dev='/dev/vg/vsv1007-dzavj6gtbxrk0zqy-3s4mvzdfizekt0vs'/>
      <target dev='hda' bus='scsi'/>
    </disk>
    <disk type='file' device='cdrom'>
      <source file='/var/virtualizor/iso/CentOS-7-x86_64-Minimal-2009.iso'/>
      <target dev='hdb' bus='ide'/>
      <readonly/>
    </disk>

Does the VPS require special drivers to be loaded?
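
One detail worth noting: with bus='scsi', libvirt attaches the disk to whatever SCSI controller model it defaults to, and the CentOS 7 installer may simply have no driver for that emulated controller; also, hda is an IDE-style target name, while SCSI disks are normally sdX. A minimal sketch of the usual KVM approach, using a virtio-scsi controller (which CentOS 7 supports out of the box) - the volume path is taken from the question:

    <!-- hedged sketch: add an explicit virtio-scsi controller -->
    <controller type='scsi' model='virtio-scsi'/>
    <disk type='block' device='disk'>
      <driver name='qemu' cache='none'/>
      <source dev='/dev/vg/vsv1007-dzavj6gtbxrk0zqy-3s4mvzdfizekt0vs'/>
      <!-- sdX is the conventional target name on the scsi bus -->
      <target dev='sda' bus='scsi'/>
    </disk>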


What causes this: Proxy could not open connection to hostname: Bad Request kex_exchange_identification: Connection closed by remote host

Posted: 04 Apr 2021 09:22 PM PDT

I am trying to access a remote server through SSH, and this is what happens:

OpenSSH_8.2p1 Ubuntu-4ubuntu0.2, OpenSSL 1.1.1f 31 Mar 2020
debug1: Reading configuration data /home/paulo/.ssh/config
debug1: /home/paulo/.ssh/config line 1: Applying options for hpc.cea.cu
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files
debug1: /etc/ssh/ssh_config line 21: Applying options for *
debug1: Executing proxy command: exec corkscrew 192.168.49.1 8282 hostname 3310
debug1: identity file /home/paulo/.ssh/authorized_keys/id_rsa type -1
debug1: identity file /home/paulo/.ssh/authorized_keys/id_rsa-cert type -1
debug1: Local version string SSH-2.0-OpenSSH_8.2p1 Ubuntu-4ubuntu0.2
Proxy could not open connection to hostname: Bad Request
kex_exchange_identification: Connection closed by remote host

This is the configuration in ~/.ssh/config:

Host hostname
    ProxyCommand corkscrew 192.168.49.1 8282 %h %p
    HostName hostname
    Port 3310
    IdentityFile /home/paulo/.ssh/authorized_keys/id_rsa

What solution do you recommend?
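
For reference, the Host pattern has to match the name typed on the ssh command line, and corkscrew's "Bad Request" reply usually means the HTTP proxy refused the CONNECT request for that host:port. A sketch of the config shape, using hpc.cea.cu from the debug output as a stand-in for the real hostname (the IdentityFile path under authorized_keys/ is also unusual; a private key normally lives directly in ~/.ssh):

    Host hpc.cea.cu
        HostName hpc.cea.cu
        Port 3310
        ProxyCommand corkscrew 192.168.49.1 8282 %h %p
        IdentityFile ~/.ssh/id_rsa

With this in place, check that the proxy at 192.168.49.1:8282 actually allows CONNECT to port 3310; many proxies only permit CONNECT to 443.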

Can a GCP organisation change its domain?

Posted: 04 Apr 2021 09:09 PM PDT

I have a client that changed its business name, and with it its domain: for example, oldcompany.com became newcompany.com. Legally they stayed the same entity, and they own both domains.

Now they have migrated most of their GCP resources to match the new name, however the GCP organisation itself remained on oldcompany.com.

Eventually they want to turn off the oldcompany.com domain, and I could not answer their question: how do you change the domain of a GCP organisation?

Nginx Configuration for support of domain name without "www"

Posted: 04 Apr 2021 09:47 PM PDT

I have deployed a LEMP stack to Linode and purchased a Wildcard SSL Certificate.

Prior to installing the SSL Certificate, I could type in example.com.au OR www.example.com.au and it would bring up the same HTTP website.

After installing the SSL Certificate, I only successfully reach the website when I type in www.example.com.au

I've tried setting the server_name inside the server blocks in the nginx configuration; however, it doesn't seem to have any effect.

Here is my nginx configuration file.

Note: I'm deploying WordPress using Docker.

upstream php {
    #server unix:/tmp/php-cgi.socket;
    server php:9000;
}

server {
    listen 80;
    listen [::]:80;
    server_name *.example.com.au;
    return 301 https://$host$request_uri;
}

server {
    listen 80;
    listen [::]:80;
    server_name example.com.au; # <-- this doesnt help at all
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name *.example.com.au;

    ssl_certificate /etc/nginx/certs/bundle.crt;
    ssl_certificate_key /etc/nginx/certs/example.com.au.key;

    root /var/www/html;
    index index.php;

    location = /favicon.ico {
        log_not_found off;
        access_log off;
    }

    location = /robots.txt {
        allow all;
        log_not_found off;
        access_log off;
    }

    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        include fastcgi.conf;
        fastcgi_intercept_errors on;
        fastcgi_pass php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }

    location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
        expires max;
        log_not_found off;
    }
}

Does anyone have a solution? When I look at the request details in the network tab, I don't see a "Host" header being sent when typing example.com.au in the URL bar of the browser... but it is present when I type in www.example.com.au
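
A hedged observation: in nginx, a wildcard server_name like *.example.com.au matches subdomains only, never the bare domain, so the HTTPS block above does not match example.com.au and the request falls through to nginx's default server. A minimal sketch of the fix (this also assumes the certificate has a SAN for the bare domain; many wildcard certificates cover *.example.com.au but not example.com.au itself):

    server {
        listen 443 ssl;
        # list the apex explicitly: the wildcard alone does not cover it
        server_name example.com.au *.example.com.au;
        ...
    }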

Thanks

WSL & VcXsrv mouse not functional

Posted: 04 Apr 2021 07:04 PM PDT

I have WSL set up on a Windows 10 Pro machine, which I generally use for development. I have gotten X apps to display properly within the Windows WM and everything seemed fine; however, no matter what I try, I cannot use the mouse buttons on any of the windows. I can use them on the root window to pull up menus, but when I try to close, expand, or change individual windows, the mouse buttons do not function at all. I have tried Multiple Windows, One Large Window, and Fullscreen modes.

Has anyone experienced this and found a solution for this odd behavior?

Using Ubuntu-20.04

Windows 10 Pro

WSL 2

Thanks, T

Combining several ACLs to make an alias

Posted: 04 Apr 2021 05:24 PM PDT

I was wondering if it is possible to combine two ACLs to reduce repeating myself in my HAProxy config. Example:

acl prod hdr_beg(host) -i example.com
acl beta hdr_beg(host) -i beta.example.com
acl url_app path_beg /app/

Instead of having to write if prod url_app or if beta url_app, I would like to write if prod_app or if beta_app, where prod_app would alias to prod AND url_app, with the same treatment for beta_app.

Is this possible?
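
As far as I know, HAProxy cannot define one ACL in terms of other ACLs: repeating an acl name merely ORs the conditions together, and AND has to be spelled out on the rule itself. A short sketch of both behaviours (backend names are placeholders):

    # two acl lines with the same name are ORed, not ANDed:
    acl app_host hdr_beg(host) -i example.com
    acl app_host hdr_beg(host) -i beta.example.com

    # AND is expressed by listing the ACL names on the rule:
    use_backend app_prod if prod url_app
    use_backend app_beta if beta url_app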

Allow a user to restart a service

Posted: 04 Apr 2021 04:41 PM PDT

I am trying to restart a service without being root. Here is the code snippet where the command is used:

template {
  source      = "{{vault_template_dir}}/agent.crt.tpl"
  destination = "{{vault_tls_dir}}/agent.crt"
  command     = "/usr/bin/systemctl restart vault.service"
}

After researching and reading similar issues, I've tried to grant a limited sudo command to the group I am using (user=vault, group=vault) by editing the /etc/sudoers file and adding the following line:

%vault ALL=(root) NOPASSWD: /usr/bin/systemctl restart vault.service  

However, I still get an error when the command runs. From the error log file:

Apr  4 23:27:41 xxxxxx vault[133973]: Failed to restart vault.service: Interactive authentication required.  

vault.service

[Unit]
After=network.service hostname.service consul-init.service consul.service
Description="Hashicorp Vault - A tool for managing secrets"
Documentation=https://www.vaultproject.io/docs/
StartLimitInterval=200
StartLimitBurst=5

[Service]
User=vault
Group=vault
ExecStart=/usr/bin/vault server -config="{{vault_server_config_file}}"
ExecReload=/usr/bin/kill -HUP $MAINIP
CapabilityBoundingSet=CAP_SYSLOG CAP_IPC_LOCK
LimitNOFILE=65536
LimitMEMLOCK=infinity
Restart=always
RestartSec=30

[Install]
WantedBy=multi-user.target
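
A hedged observation: the sudoers entry only applies when systemctl is invoked through sudo, but the template command above runs systemctl directly as the unprivileged vault user, which is exactly the case where systemd demands interactive authentication. A sketch of the template stanza with sudo added (assuming the NOPASSWD rule above is in place):

    template {
      source      = "{{vault_template_dir}}/agent.crt.tpl"
      destination = "{{vault_tls_dir}}/agent.crt"
      # invoke through sudo so the NOPASSWD sudoers rule applies
      command     = "sudo /usr/bin/systemctl restart vault.service"
    }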

Can someone please help me with this issue?

Linux VM not accessible on local network

Posted: 04 Apr 2021 03:57 PM PDT

I have a Linux VM in Hyper-V that's only accessible from outside the local network. SSH, HTTP, and so on work just fine when port-forwarded, but on the local network it rejects everything, including pings, even from the host machine (Windows 10 Pro). I can monitor the network traffic to and from the VM in Wireshark, but that's about it.

Things that might be worth noting:

  • I'm using a virtual switch, so the VM has a separate local IP address from the host
  • The host also seems to reject pings, but responds to HTTP
  • The VM can't find any local devices, including the router

Google load balancer HTTPS between wordpress.com and google storage bucket

Posted: 04 Apr 2021 03:40 PM PDT

My question is about how to best route a particular URL to a static page while routing everything else to a wordpress.com site.

I have a domain on Google Domains; currently the A record for the bare domain points to the IPs of the wordpress.com site.

I'd like to instead point to a load balancer/reverse proxy so that a particular URL (on the bare domain, not a subdomain) gets pointed to a static html page (in google storage bucket or wherever is best), while everything else gets routed to the wordpress.com IPs.

How do I best do this in Google Cloud?
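
One possible shape, sketched with gcloud (all names are placeholders, and this assumes an external HTTPS load balancer whose default backend already fronts the wordpress.com IPs, e.g. via an internet NEG, which is not shown): a backend bucket holds the static page and a URL-map path matcher routes just that path to it.

    # backend bucket for the static page
    gcloud compute backend-buckets create static-page-bucket \
        --gcs-bucket-name=my-static-site-bucket

    # URL map: everything defaults to the wordpress backend,
    # one path goes to the bucket
    gcloud compute url-maps create main-map \
        --default-service=wordpress-backend
    gcloud compute url-maps add-path-matcher main-map \
        --path-matcher-name=static-matcher \
        --default-service=wordpress-backend \
        --backend-bucket-path-rules="/special-page/*=static-page-bucket"

You would then point the bare domain's A record at the load balancer's IP instead of the wordpress.com IPs.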

Thanks!

What decides node's hostname when adding new node with microk8s?

Posted: 04 Apr 2021 05:36 PM PDT

I am building a cluster of machines all running the same setup:

  • Ubuntu Server 20.04.2
  • during installation I select a unique short hostname
  • when the OS is installed, I add microk8s 1.20/stable via snap and add permissions following this tutorial

I decided to turn off HA by running microk8s disable ha-cluster after installation.

I run microk8s add-node on the master and the first two machines connect successfully, creating a cluster of three nodes, one of them being the master. The problem occurs with the 4th machine: although it connects just fine, kubelet doesn't use the "pretty" hostname defined in /etc/hostname but the machine's internal IP. Everything works, but this results in an inconsistent and ugly node list.

Running microk8s.kubectl edit node on the master, I cherry-pick the problematic machine on IP 192.168.0.134 (hostname zebra) and one of the machines which connected with its hostname as intended (rhombus):

- apiVersion: v1
  kind: Node
  metadata:
    annotations:
      node.alpha.kubernetes.io/ttl: "0"
      volumes.kubernetes.io/controller-managed-attach-detach: "true"
    creationTimestamp: "2021-04-04T18:08:15Z"
    labels:
      beta.kubernetes.io/arch: amd64
      beta.kubernetes.io/os: linux
      kubernetes.io/arch: amd64
      kubernetes.io/hostname: 192.168.0.134
      kubernetes.io/os: linux
      microk8s.io/cluster: "true"
    name: 192.168.0.134
    resourceVersion: "27486"
    selfLink: /api/v1/nodes/192.168.0.134
    uid: 09c01d87-1ae4-452f-8908-6dcb85a5999a
  spec: {}
  status:
    addresses:
    - address: 192.168.0.134
      type: InternalIP
    - address: 192.168.0.134
      type: Hostname
    ...
- apiVersion: v1
  kind: Node
  metadata:
    annotations:
      node.alpha.kubernetes.io/ttl: "0"
      volumes.kubernetes.io/controller-managed-attach-detach: "true"
    creationTimestamp: "2021-04-04T13:59:21Z"
    labels:
      beta.kubernetes.io/arch: amd64
      beta.kubernetes.io/os: linux
      kubernetes.io/arch: amd64
      kubernetes.io/hostname: rhombus
      kubernetes.io/os: linux
      microk8s.io/cluster: "true"
    name: rhombus
    resourceVersion: "27244"
    selfLink: /api/v1/nodes/rhombus
    uid: f125573a-0efb-444c-849b-f0521fe3b813
  spec: {}
  status:
    addresses:
    - address: 192.168.0.105
      type: InternalIP
    - address: rhombus
      type: Hostname

I find that the --hostname-override argument is causing this headache:

$ sudo grep -rlw "192.168.0.134" /var/snap/microk8s/2094/args
/var/snap/microk8s/2094/args/kube-proxy
/var/snap/microk8s/2094/args/kubelet
/var/snap/microk8s/2094/args/kubelet.backup
$ cat /var/snap/microk8s/2094/kubelet
...
--cluster-domain=cluster.local
--cluster-dns=10.152.183.10
--hostname-override 192.168.0.134

Comparing the file against the same one on the machines without this problem, the last line is the extra one. The same goes for the copy under /var/snap/microk8s/current/...; I don't know what the difference between the two is.

If I try to remove that line or change the IP to zebra, the setting is ignored and overwritten (somehow). Doing this was suggested in an answer to a related question here. Other answers suggest a reset; I ran microk8s reset with no difference. To verify each step along the way, I ran the same commands on one of the machines that connects with its "pretty" hostname, and it always retained the "pretty" hostname.
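
A hedged guess at the mechanism: kubelet normally registers under the machine's hostname, and microk8s appears to fall back to writing --hostname-override with the node IP when the hostname can't be used at join time (for example, if it isn't resolvable on the node itself, or isn't a valid lowercase node name). Something worth checking on zebra before the next join attempt:

    # the hostname must be lowercase and resolvable locally
    hostname
    getent hosts "$(hostname)" || echo "hostname not resolvable"

    # one way to ensure resolvability before joining (IP/name from the question)
    echo "192.168.0.134 zebra" | sudo tee -a /etc/hosts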

What should I change before I connect the node in order to display the correct name? Why would the same installation steps on different machines result in different node names?

EDIT: I reinstalled OS on the machine and the issue remains.

adding modules to installed asterisk - alsa.so not found

Posted: 04 Apr 2021 07:53 PM PDT

I have set

load= chan_alsa.so  

and I got this error:

ERROR[77064] loader.c: Error loading module 'alsa.so': /usr/lib/asterisk/modules/alsa.so: cannot open shared object file: No such file or directory  

Is there a missing module, or is ALSA global?
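
Two hedged things to check: in /etc/asterisk/modules.conf the channel driver is normally loaded as load => chan_alsa.so (note the => and the chan_ prefix), and the module file only exists if Asterisk was built with ALSA support selected, which requires the ALSA development headers at compile time. A quick sketch:

    # is the channel driver actually installed?
    ls /usr/lib/asterisk/modules/ | grep -i alsa

    # if nothing is listed, chan_alsa was not built; install the ALSA dev
    # headers and rebuild (package name varies by distro, e.g. libasound2-dev)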

IPTABLES - block IPs that do not complete handshake/visit webpage

Posted: 04 Apr 2021 03:06 PM PDT

I am trying to figure out how to achieve something I am not sure is achievable, and I need help. I did my research but couldn't find credible information. I hope this question is not a duplicate.

SETUP: I am using iptables as my firewall to block malicious IP activity. Currently I write the entries manually in a file and then load them with iptables-restore < /etc/iptables/rules. Within those rules I have one that logs every inbound connection: -A INPUT -m state --state NEW -j LOGALL. I have also set up Apache to log the IPs that connect to the web pages (different logs for each page, and a different log file for iptables).

PROBLEM: I get numerous iptables logs of this kind:

Apr  4 14:52:18 kernel: [53326.219105] LOGALL IN=eth0 OUT= MAC=xxxxxxxxxxxx SRC=174.111.111.206 DST=192.168.1.5 LEN=44 TOS=0x00 PREC=0x00 TTL=244 ID=40132 PROTO=TCP SPT=179 DPT=443 WINDOW=5840 RES=0x00 SYN URGP=0
Apr  4 14:53:27 kernel: [53395.130551] LOGALL IN=eth0 OUT= MAC=xxxxxxxxxxxx SRC=45.146.164.211 DST=192.168.1.5 LEN=44 TOS=0x04 PREC=0x00 TTL=247 ID=26977 PROTO=TCP SPT=55172 DPT=443 WINDOW=1024 RES=0x00 SYN URGP=0

from tens of different IPs, every minute or so. I can tell from the log that they only send a SYN packet. I have used Wireshark to inspect the traffic, and from what I can tell most don't answer after my server responds with SYN,ACK:

54215  187.717006840  180.234.40.115  192.168.1.5     TCP  60  56412 → 443 [SYN] Seq=0 Win=5840 Len=0 MSS=1460
54216  187.717251257  192.168.1.5     180.234.40.115  TCP  58  443 → 56412 [SYN, ACK] Seq=0 Ack=1 Win=29200 Len=0 MSS=1460
54411  188.716638340  192.168.1.5     180.234.40.115  TCP  58  [TCP Retransmission] 443 → 56412 [SYN, ACK] Seq=0 Ack=1 Win=29200 Len=0 MSS=1460

I have tried many different ways with iptables to limit those SYN-only packets. These connections are not SYN flood attacks (I have rate-limited SYN connections) but probably crawlers and scans. I also tried https://inai.de/documents/Chaostables.pdf, which gave me a lot of hope, but it didn't work, or I can't get it to work.

I have also looked into fail2ban (I haven't used it yet), but since I write the entries to iptables myself and load them with iptables-restore < /etc/iptables/rules, and fail2ban uses iptables too, I don't know how the two would work together.

QUESTION: Is it possible to block IPs that show up in the iptables log file but NOT in the Apache log files? (For me that would mean the IP did not come to my server to open the webpage, and thus is doing something else.) Example: IP 1.1.1.1 opens my webpage, so it is logged by both iptables and Apache. But if 1.1.1.1 only sends a SYN packet to port 443, only the iptables log will show it -> block that IP?
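
In principle yes, though not inside iptables itself: nothing in the kernel correlates the two logs, so a periodic script has to do the diff and feed a blocklist. A rough sketch using an ipset (log paths and the set name are assumptions, and this will also ban clients whose connections fail for innocent reasons):

    #!/bin/sh
    # IPs seen by the firewall (SRC= fields of the LOGALL entries)
    grep -oE 'SRC=[0-9.]+' /var/log/iptables.log | cut -d= -f2 | sort -u > /tmp/fw_ips
    # IPs that actually requested a page from Apache
    awk '{print $1}' /var/log/apache2/access.log | sort -u > /tmp/web_ips
    # firewall-only IPs go into an ipset called "banned"
    comm -23 /tmp/fw_ips /tmp/web_ips | while read ip; do
        ipset add banned "$ip" -exist
    done

This assumes a one-time ipset create banned hash:ip plus a rule such as iptables -I INPUT -m set --match-set banned src -j DROP; because the set lives outside your rules file, it also survives iptables-restore.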

I hope I have been clear enough. Any help would be appreciated. Thank you.

How to access files from a local server with infiniband without using IP

Posted: 04 Apr 2021 02:27 PM PDT

I want to start off by saying that I'm very inexperienced with everything I'm doing, so please take it easy on me.

I have 4 computers, each with Red Hat Enterprise Linux Server 7.8 installed; they are all connected to a Voltaire Grid Director 4036 InfiniBand switch that's running the subnet manager. For simplicity's sake I will call the computers s1-s4. I want s2, s3 and s4 to be able to access and download files from s1 without using IP, since I've read that removing the use of IP can improve performance by up to 20% (correct me on this if I'm wrong). Is it even possible to make file transfers without the use of IP?
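
For context, a hedged sketch of the options: transferring files with no IP at all means running native RDMA (verbs) applications, which is uncommon for plain file serving; the usual compromise is to keep IPoIB only for connection setup and let the data path use RDMA. NFS over RDMA is the most common example (this assumes s1 exports a directory over NFS with the RDMA transport enabled; the export name and mount point are placeholders):

    # on s2..s4: bulk data moves over RDMA rather than TCP/IP
    mount -t nfs -o proto=rdma,port=20049 s1:/data /mnt/data

The oft-quoted performance gain comes from bypassing the TCP/IP stack for the bulk data, which this achieves even though an IPoIB address is still used to name the server.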

Also, since I'm new to this site feel free to tell me if I should have given more information or was unclear about something. If you think I have misunderstood something then you can also feel free to correct me on it.

Thanks

LXD Container IPv4 Interface Management

Posted: 04 Apr 2021 08:51 PM PDT

Ubuntu 18.04.4
lxd 3.0.3
lxc 3.0.3

I have had several containers running without issue for a long time. Today I was making changes to my network and one of the containers picked up a DHCP address.

user@localhost:/tmp$ sudo lxc list host_a
+-----------------------+---------+--------------------------+------+------------+-----------+
|         NAME          |  STATE  |           IPV4           | IPV6 |    TYPE    | SNAPSHOTS |
+-----------------------+---------+--------------------------+------+------------+-----------+
| host_a                | RUNNING | 192.168.112.5 (vlan112)  |      | PERSISTENT | 3         |
|                       |         | 192.168.11.8 (eth0)      |      |            |           |
|                       |         | 192.168.11.193 (eth0)    |      |            |           |
+-----------------------+---------+--------------------------+------+------------+-----------+

Interfaces 192.168.112.5 and 192.168.11.8 are original interfaces that have existed all along and need to remain. 192.168.11.193 is the address that appeared today during the network changes, and it is the one I can't find to remove. I don't see it in the container's configuration, and I can't figure out how to remove it via lxc. I resorted to rebooting the container and the LXD host, yet it remains.
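
If it is simply a stale extra DHCP lease on eth0 inside the container, it can be inspected and dropped from the host (a sketch; the /24 prefix is an assumption, so check what ip addr show reports first):

    lxc exec host_a -- ip addr show dev eth0
    lxc exec host_a -- ip addr del 192.168.11.193/24 dev eth0

If the address comes back, the container's DHCP client is still requesting it, and its lease configuration is the place to look.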

Synchronise Azure Active Directory with OpenLDAP - possible?

Posted: 04 Apr 2021 09:05 PM PDT

We have a SharePoint Online site and an Azure Active Directory to manage our users. We also use OpenLDAP on a Linux server, and I want to synchronise the two, so that every time I change users in LDAP the changes are synchronised to Azure AD.

I hope this makes sense, thanks in advance!

Docker Host with multiple VLANs

Posted: 04 Apr 2021 05:02 PM PDT

Background Information

I have a server with one physical network interface that is running Docker. This interface is configured as an 802.1Q trunk. To avoid asymmetric routing I configured routing tables for each subnet. This is my /etc/network/interfaces:

auto enp3s0
iface enp3s0 inet dhcp
    post-up ip route add 192.168.1.0/24 dev enp3s0 table 1
    post-up ip route add default via 192.168.1.1 table 1
    post-up ip rule add from 192.168.1.0/24 table 1 priority 101
    post-up ip route flush cache
    pre-down ip rule del from 192.168.1.0/24 table 1 priority 101
    pre-down ip route flush table 1
    pre-down ip route flush cache

auto enp3s0.2
iface enp3s0.2 inet dhcp
    hwaddress ether 00:11:22:33:44:55
    post-up ip route add 192.168.2.0/24 dev enp3s0.2 table 2
    post-up ip route add default via 192.168.2.1 table 2
    post-up ip rule add from 192.168.2.0/24 table 2 priority 102
    post-up ip route flush cache
    pre-down ip rule del from 192.168.2.0/24 table 2 priority 102
    pre-down ip route flush table 2
    pre-down ip route flush cache

auto enp3s0.4
iface enp3s0.4 inet dhcp
    hwaddress ether 00:11:22:33:44:56
    post-up ip route add 192.168.4.0/24 dev enp3s0.4 table 4
    post-up ip route add default via 192.168.4.1 table 4
    post-up ip rule add from 192.168.4.0/24 table 4 priority 104
    post-up ip route flush cache
    pre-down ip rule del from 192.168.4.0/24 table 4 priority 104
    pre-down ip route flush table 4
    pre-down ip route flush cache
...

This setup works fine if I start containers with the --net=host parameter: the containers are accessible from each subnet/VLAN.

The Problem

I would like to have more control over the ports and the accessibility (not every container should be reachable in every subnet). If I use the -p parameter (e.g. -p 3777:3777), the containers are not reachable anymore.

This guide https://hicu.be/docker-networking-macvlan-vlan-configuration addresses a similar problem, but I do not want to extend my VLANs into Docker and assign an IP to every container instance. That is too much.

Desired solution

My server has an IP in every subnet/VLAN: 192.168.1.199 (native VLAN / mgmt), 192.168.2.199 (VLAN 2), 192.168.4.199 (VLAN 4).

I would like to start containers with the -p parameter and choose on which interface each is accessible, e.g. docker run -p 9000:9000 --name portainer ... should only be accessible through 192.168.1.199:9000.

Maybe my ip route / ip rule settings are not well configured and/or I need a Docker bridge for each subnet, but that's the point where I cannot get any further. Up to now, if I use the -p parameter and the container is connected to the default Docker bridge, the container is not accessible at all.
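
For reference, the bridge-based way to scope a published port to one host address is to prefix the IP in -p, which is what the portainer_test container in the edit below already does:

    docker run -d --name portainer -p 192.168.1.199:9000:9000 portainer/portainer

If such a binding still isn't reachable from its VLAN, one hedged suspect given the config above is the policy routing: reply packets originate from the Docker bridge subnet (e.g. 172.17.0.0/16), which none of the from rules cover, so they may not leave via the intended VLAN's routing table.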

Do you have any idea?

Greets, Mark

Edit: The container portainer_test is not accessible via 192.168.4.199:9001

mark@server:~/docker$ docker ps -a
CONTAINER ID        IMAGE                       COMMAND                  CREATED             STATUS                  PORTS                          NAMES
359ebfd027b2        portainer/portainer         "/portainer -H unix:…"   21 minutes ago      Up About a minute       192.168.4.199:9001->9000/tcp   portainer_test
9d523a8b22e4        eclipse-mosquitto           "/docker-entrypoint.…"   10 days ago         Up 16 hours                                            mosquito
a2eeb9582838        portainer/portainer         "/portainer"             10 days ago         Up 16 hours                                            portainer
f4ef7570cea2        symcon/symcon:stable        "/usr/bin/symcon"        10 days ago         Up 16 hours                                            symcon
ae43e8be871f        jacobalberty/unifi:stable   "/usr/local/bin/dock…"   10 days ago         Up 16 hours (healthy)                                  unifi
mark@server:~/docker$ sudo netstat -tulpn | grep LISTEN
tcp        0      0 127.0.0.1:27117         0.0.0.0:*               LISTEN      23374/bin/mongod
tcp        0      0 127.0.0.53:53           0.0.0.0:*               LISTEN      30474/systemd-resol
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1592/sshd
tcp        0      0 0.0.0.0:1883            0.0.0.0:*               LISTEN      22212/mosquitto
tcp        0      0 0.0.0.0:3777            0.0.0.0:*               LISTEN      22247/symcon
tcp        0      0 192.168.4.199:9001      0.0.0.0:*               LISTEN      18622/docker-proxy
tcp6       0      0 :::8843                 :::*                    LISTEN      22511/java
tcp6       0      0 :::8880                 :::*                    LISTEN      22511/java
tcp6       0      0 :::8080                 :::*                    LISTEN      22511/java
tcp6       0      0 :::8443                 :::*                    LISTEN      22511/java
tcp6       0      0 :::1883                 :::*                    LISTEN      22212/mosquitto
tcp6       0      0 :::6789                 :::*                    LISTEN      22511/java
tcp6       0      0 :::9000                 :::*                    LISTEN      22273/portainer

How to reduce latency with Nginx RTMP streaming server

Posted: 04 Apr 2021 07:02 PM PDT

My Virtual Server is configured with 3GB memory, and 1 core.

I'm playing the following mp4 file Sample MP4 Video File through my NGINX RTMP server, as small.mp4. I'm experiencing a latency issue.

Here is my nginx.conf:

rtmp {
    server {
        listen 1935;
        chunk_size 4000;

        # video on demand for flv files
        application live {
            play /usr/local/nginx/html;
        }

        # video on demand for mp4 files
        application live360 {
            play /usr/local/nginx/html;
        }
    }
}

# HTTP can be used for accessing RTMP stats
http {
    access_log /var/log/nginx/access-streaming.log;
    error_log /var/log/nginx/error-streaming.log;

    server {
        # in case we have another web server on port 80
        listen 8080;

        # This URL provides RTMP statistics in XML
        location /stat {
            rtmp_stat all;
            rtmp_stat_stylesheet stat.xsl;
        }

        location /stat.xsl {
            # XML stylesheet to view RTMP stats.
            # Copy stat.xsl wherever you want
            # and put the full directory path here
            root /usr/local/nginx/html;
        }

        location /hls {
            # Serve HLS fragments
            types {
                application/vnd.apple.mpegurl m3u8;
                video/mp2t ts;
            }
            alias /tmp/app;
            expires -1;
        }
    }
}
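
A hedged note on the knobs that usually matter for nginx-rtmp latency: the RTMP chunk_size and the player's buffer dominate for plain RTMP playback, and if HLS is involved (this config serves /hls but never generates fragments), the fragment and playlist lengths set a hard floor on delay. A sketch of a live application with short HLS fragments, using the standard nginx-rtmp directives (the application name and paths are placeholders):

    application livestream {
        live on;
        # smaller fragments lower HLS latency at the cost of more requests
        hls on;
        hls_path /tmp/app;
        hls_fragment 1s;
        hls_playlist_length 4s;
    }

Note the existing applications use play (video on demand); for VOD, startup delay is mostly the player's buffering rather than anything on the server side.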

Try to reverse-proxy vsphere webclient with Apache

Posted: 04 Apr 2021 05:02 PM PDT

We want to protect our VMware vSphere 6.5 Web Client with an already existing and working Apache 2.4 reverse proxy (benefits: centralised monitoring, mod_security, et al.).

Both communication legs, client <--> proxy and proxy <--> backend (= vSphere), must be TLS secured. Certificates are in place and OK. DNS is configured accordingly.

Clients can already access the vSphere start page via the proxy successfully, e.g. https://vsphere.domain.tld/

Firefox's network analysis shows that all requests are fine and accepted, e.g.

    302 GET /vsphere-client/ [FQDN] document html  

until /vsphere-client/UI.swf

But as soon as a user clicks the link "vSphere Web Client (Flash)" in order to authenticate and enter the menus, a status code 400 is thrown. The "vSphere Web Client (Flash)" link directs to /vsphere-client/ and obviously invokes a SAML request.

    400 GET https://vsphere.domain.tld/websso/SAML2/SSO/vsphere.local?SAMLRequest=zVRba9sw[...] [FQDN] subdocument  

The vSphere SSO log shows:

    tomcat-http--38 ERROR org.opensaml.common.binding.decoding.BaseSAMLMessageDecoder] SAML message intended destination endpoint 'https://vsphere-internal.domain.tld/websso/SAML2/SSO/vsphere.local' did not match the recipient endpoint 'https://vsphere.domain.tld/websso/SAML2/SSO/vsphere.local'  

Virtual host conf on the Apache reverse proxy so far (excerpt):

    SSLProxyEngine on
    ProxyPreserveHost on
    ProxyRequests off

    ProxyPass        / https://vsphere.domain.tld/
    ProxyPassReverse / https://vsphere.domain.tld/

    ProxyPass        /vsphere-client https://vsphere.domain.tld/vsphere-client/
    ProxyPassReverse /vsphere-client https://vsphere.domain.tld/vsphere-client/
    ProxyPass        /websso/SAML2/SSO https://vsphere.domain.tld/websso/SAML2/SSO/
    ProxyPassReverse /websso/SAML2/SSO https://vsphere.domain.tld/websso/SAML2/SSO/

    # new, to solve the name binding problem (see 1st answer)
    RequestHeader set Host "vsphere-internal.domain.tld"

With the last "RequestHeader" addendum - which in effect just reverses the PreserveHost option - I am now able to see the vsphere login page, and to log in, but the page then stucks again:

    tomcat-http--10 ERROR com.vmware.identity.BaseSsoController] Could not parse tenant request java.lang.IllegalStateException: org.opensaml.xml.security.SecurityException: SAML message intended destination endpoint did not match recipient endpoint  

Any proposals for how to get the full page?

Exchange 2010 Receive Connector configuration

Posted: 04 Apr 2021 03:06 PM PDT

I am having a very hard time getting clear about receive connectors in Exchange 2010, which I have unhappily inherited.

I have read a lot of articles and books, but nothing presents the information I need in a clear way, and some of the articles are conflicting, which is expected but doesn't make this any easier.

The reason for asking these questions together is that some of them impact others, and if asked separately they would probably not make clear what I need to accomplish.

My Exchange server is Hub role internet-facing. There is no edge or filter between it and the outside.

I recently implemented split DNS, and I want to know if the "Server" and "Fqdn" attributes should be changed to the public DNS name for my mail server, as in "mail.domain.com".

The "Name" attribute is only a label for the connector which shows in the Exchange Management Console.

The "Identity" attribute is related to the GUID. Again, it currently shows as \<"Name" attribute>. The question is whether I can change that to the public DNS name for my mail server, "mail.domain.com".

I need to know if the DistinguishedName attribute can or should also be changed.

Some of my existing connectors have a value for the "DistinguishedName" attribute which reads "CN=,CN=SMTP Receive Connectors,CN=Protocols,CN=,CN=Servers,CN=Exchange Administrative Group,(...),CN=Administrative Groups,CN=First Organization,CN=Microsoft Exchange,CN=Services,CN=Configuration,DC=,DC=". I know that at one point a migration was done from Exchange 2003, and that it was not cleaned up, so to speak. I need to know if I can simply remove connectors with that value, or whether they need to be replaced with other ones.

The "Default" connector, as I understand it, receives email from the internet on port 25 from any IP (0.0.0.0-255.255.255.255).

I need to understand how AuthMechanism, RequireTLS and PermissionGroups relate to each other, and where RequireTLS is appropriate, as in I don't want to lose email by forcing TLS.

I want to configure TLS. I need to understand where "opportunistic" vs "mutual" applies, as in do I use it for both internet and internal, or only internet.

I want to understand which values should be set for the "AuthMechanism" attribute, and why, given that I want to configure TLS.

I have internal applications which I think need separate connectors, from what I have read. I need to know if that is true, and how to configure that.

I have multifunction printers which send scans via email, which I think also need separate connectors, and I need to know how to configure that.

I have a third party who needs to send email using my DNS name and IP, which I think is called "relay". I need to know if that is correct, and how to configure it.
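
For the application, printer, and third-party cases, the usual pattern (hedged, and all names and IP ranges below are placeholders) is one dedicated receive connector per class of sender, scoped by RemoteIPRanges; true relay additionally requires granting the anonymous account the "accept any recipient" right. A sketch from the Exchange Management Shell:

    New-ReceiveConnector -Name "App Relay" -Server MAILSRV `
        -Bindings 0.0.0.0:25 -RemoteIPRanges 10.0.0.50-10.0.0.60 `
        -PermissionGroups AnonymousUsers

    Get-ReceiveConnector "MAILSRV\App Relay" | Add-ADPermission `
        -User "NT AUTHORITY\ANONYMOUS LOGON" `
        -ExtendedRights "ms-Exch-SMTP-Accept-Any-Recipient"

Because connectors are selected by the most specific RemoteIPRanges match, such a connector wins over the Default connector for those source IPs without touching internet mail flow.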

Links to articles which don't present specific instructions on how to accomplish what I listed above are not helpful.

Explanations of how these connectors are used for my specific needs are very helpful, and I appreciate the help.

Why does a RewriteCond %{REQUEST_URI} interfere with a second NOT condition?

Posted: 04 Apr 2021 04:05 PM PDT

First, the rule that works:

DirectoryIndex index.php
ErrorDocument 403 /form.html

RewriteCond %{REQUEST_URI} ^/index\.php$
RewriteCond %{REQUEST_METHOD} !POST
RewriteRule . - [F,L]

This means http://example.com and http://example.com/index.php can only be opened through POST.

Now the problem

I added this additional rule set:

RewriteCond %{REQUEST_URI} !^/index\.php$
RewriteRule . - [F,L]

Now, I send a POST again to http://example.com but receive this error:

Forbidden

You don't have permission to access / on this server.
Additionally, a 500 Internal Server Error error was encountered while trying to use an ErrorDocument to handle the request.

This does not make sense, because the rule should NOT catch requests for index.php and send a 403. But OK, I extended the second rule set as follows:

RewriteCond %{REQUEST_URI} !^/form\.html$
RewriteCond %{REQUEST_URI} !^/index\.php$
RewriteRule . - [F,L]

Sending a POST to http://example.com again no longer returns the 500, but I still receive a 403?!

Update 1
If I remove the first rule set, the second one works alone as expected. This means only http://example.com, http://example.com/index.php and http://example.com/form.html can be accessed.

Update 2
If I use both rule sets and send my POST to http://example.com/index.php I do not receive any errors?!

So the rules interfere only if I send a POST to the root URL. But why?
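
A hedged reading, assuming these rules live in the server/vhost config: for a request to the root, REQUEST_URI is /, not /index.php - DirectoryIndex only maps it to index.php afterwards, in an internal redirect. / matches the pattern ., and it is neither /form.html nor /index.php, so the second rule set forbids it before the index is ever selected; the original 500 came from the ErrorDocument /form.html being forbidden by the very same rule set. Under that reading the fix is to exclude the root path as well:

    RewriteCond %{REQUEST_URI} !^/$
    RewriteCond %{REQUEST_URI} !^/form\.html$
    RewriteCond %{REQUEST_URI} !^/index\.php$
    RewriteRule . - [F,L]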

Ubuntu 16.04 server is listening but not accepting incoming requests

Posted: 04 Apr 2021 09:05 PM PDT

I recently upgraded my server from Ubuntu 14.04 to 16.04. The upgrade seemed successful (so I'm not sure if it's related) but about a week in I restarted the host and it now will not accept remote requests.

I can connect to the terminal using my hosting provider's console access, but I can't SSH into the machine remotely. Once in the machine I can ping myhost.com successfully, but I cannot ping the machine from a remote location; pinging from my development machine results in Request timed out.

I tried loading a previous snapshot from before I upgraded the OS, and I can ping the machine successfully.

I've tried tailing /var/log/auth.log but the log is not updated when I try to access remotely.

I'm not sure what to try next to find out why my server is not responding.

EDIT

Running iptables -nvL results in:

modprobe: FATAL: Module ip_tables not found in directory /lib/modules/4.4.0-28-generic
iptables v1.6.0: can't initialize iptables table `filter': Table does not exist (do you need to insmod?)
Perhaps iptables or your kernel needs to be upgraded.
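
A hedged pointer: this error means there is no module tree for the running kernel under /lib/modules/4.4.0-28-generic, which can happen after a release upgrade when the matching kernel module package is missing - and without ip_tables, any netfilter-dependent networking can misbehave. The usual check and fix on 16.04 (package naming of that era):

    uname -r              # kernel actually running
    ls /lib/modules/      # module trees actually installed

    # reinstall the module package for the running kernel, then reboot
    sudo apt-get install --reinstall linux-image-extra-$(uname -r)
    sudo reboot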

How to make /mnt/resource readable/writable by other users in an Azure Linux VM?

Posted: 04 Apr 2021 08:09 PM PDT

Right now /mnt/resource is owned by root and only root can read/write. How can I make this readable/writable by other users on the system?

And this should be persistent (i.e. it should still work after a system restart).
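
One hedged caveat first: /mnt/resource is the ephemeral resource disk mounted by the Azure Linux agent, and it can be wiped and re-created (e.g. on redeploy), so a one-off chmod may not survive. A common workaround is a boot-time hook that prepares a world-writable subdirectory; a sketch as a systemd unit (the unit name and path are assumptions):

    # /etc/systemd/system/resource-perms.service
    [Unit]
    Description=Prepare a shared directory on the ephemeral resource disk
    After=local-fs.target

    [Service]
    Type=oneshot
    ExecStart=/bin/mkdir -p /mnt/resource/shared
    ExecStart=/bin/chmod 1777 /mnt/resource/shared

    [Install]
    WantedBy=multi-user.target

Enable it once with systemctl enable resource-perms.service; the sticky 1777 mode mirrors /tmp semantics, so users cannot delete each other's files.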

No package mongodb-org available

Posted: 04 Apr 2021 08:09 PM PDT

I am attempting to install MongoDB on CentOS 6.5. I believe I am following the instructions precisely step-by-step, but continue to get the error No package mongodb-org available after issuing the command sudo yum install -y mongodb-org.

Following the instructions here: http://docs.mongodb.org/master/tutorial/install-mongodb-on-red-hat/?_ga=1.140464624.273085478.1441642123

[vagrant@localhost lounge]$ sudo yum install -y mongodb-org
Loaded plugins: fastestmirror
Setting up Install Process
Loading mirror speeds from cached hostfile
 * base: centos.host-engine.com
 * epel: ftp.osuosl.org
 * extras: ftp.osuosl.org
 * remi-safe: mirrors.mediatemple.net
 * updates: mirror.solarvps.com
No package mongodb-org available.
Error: Nothing to do

I'm looking at the instructions for RHEL 6, and this is what I have (am I using the correct instructions?):

[vagrant@localhost yum.repos.d]$ cat /proc/version  Linux version 2.6.32-431.el6.i686 (mockbuild@c6b8.bsys.dev.centos.org) (gcc version 4.4.7 20120313 (Red Hat 4.4.7-4) (GCC) ) #1 SMP Fri Nov 22 00:26:36 UTC 2013  

Likewise, the command yum search mongodb-org says No matches found.

Here is the repo file the instructions said to create:

[vagrant@localhost yum.repos.d]$ cat mongodb-org-3.0.repo
[mongodb-org-3.0]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/3.0/x86_64/
gpgcheck=0
enabled=1
[vagrant@localhost yum.repos.d]$

Permanent Workaround

I upgraded the OS from CentOS 6.5 to CentOS 7, following subtly different instructions:

http://docs.mongodb.org/master/tutorial/install-mongodb-on-red-hat/?_ga=1.169228258.273085478.1441642123

The repo file is the same, but for whatever reason CentOS 7 procedures worked without a hitch.

Note: I don't think this is an answer, just a workaround, so if someone can say why the 6.5 procedures didn't work, that would be the actual answer.
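
A hedged candidate for that actual answer: the kernel version output above says i686, i.e. a 32-bit installation, while the repo baseurl ends in .../x86_64/ - and MongoDB publishes the mongodb-org packages for 64-bit only, in which case yum's "No package available" is correct rather than a repo misconfiguration. The quick check:

    uname -m    # i686 here -> 32-bit userland; the repo only carries x86_64

That would also explain why the CentOS 7 rebuild (which is 64-bit only) worked without a hitch.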

Zabbix ssh needs to force pseudo tty allocation

Posted: 04 Apr 2021 06:01 PM PDT

I am currently trying to configure an item in Zabbix to execute a check on a remote server via SSH. When I run the following command on the Zabbix box, it works:

sudo -u zabbix ssh -t root@[remote_ip] 'sudo ls'  

However, when I run this:

sudo -u zabbix ssh root@[remote_ip] 'sudo ls'  

I get sudo: sorry, you must have a tty to run sudo. I understand this is because I have not forced TTY allocation. My question is: how can I get the Zabbix ssh.run item key to force a TTY? Preferably, we'd rather not make any updates to the remote host.
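
For context, a hedged note: the message comes from the remote host's requiretty default in sudoers, and as far as I know the ssh.run item key has no option to allocate a pseudo-terminal. The realistic options are therefore a one-line relaxation on the remote side (despite the stated preference) or avoiding sudo for the check. The remote change, for reference:

    # on the remote host, via visudo - scope it to the account the item logs in as
    Defaults:root !requiretty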

External ip address being treated as local (ARP asking for external IP's address)

Posted: 04 Apr 2021 03:06 PM PDT

I am using a virtual machine with OpenWRT for routing, on a Linux machine (Slackware). I am trying to configure a host-only interface (eth0) as the WAN interface; eth1 is Ethernet, attached as a bridged interface.

I tried to test the configuration by pinging an external IP address (from OpenWRT):

# ping -I eth0 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
^C
--- 8.8.8.8 ping statistics ---
6 packets transmitted, 0 packets received, 100% packet loss

I used Wireshark and saw that the system (OpenWRT side) is sending ARP requests asking for the MAC address of 8.8.8.8. What is going on? It looks like the gateway is being ignored.

My route:

# route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         192.168.56.1    0.0.0.0         UG    1      0        0 eth0
default         192.168.56.1    0.0.0.0         UG    5      0        0 eth0
192.168.0.0     *               255.255.255.0   U     0      0        0 eth1
192.168.56.0    *               255.255.255.0   U     5      0        0 eth0

iptables is configured to accept all packets for INPUT, OUTPUT and FORWARD.

More tests:

# ping -I eth0 192.168.56.1
PING 192.168.56.1 (192.168.56.1): 56 data bytes
64 bytes from 192.168.56.1: seq=0 ttl=64 time=10.000 ms
64 bytes from 192.168.56.1: seq=1 ttl=64 time=0.000 ms
64 bytes from 192.168.56.1: seq=2 ttl=64 time=0.000 ms
64 bytes from 192.168.56.1: seq=3 ttl=64 time=0.000 ms
^C
--- 192.168.56.1 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.000/2.500/10.000 ms


# ip route show
default via 192.168.56.1 dev eth0  proto static  metric 1
default via 192.168.56.1 dev eth0  proto static  metric 5
192.168.0.0/24 dev eth1  proto kernel  scope link  src 192.168.0.1
192.168.56.0/24 dev eth0  proto static  scope link  metric 5

Linux IPSec between Amazon EC2 instances on same subnet

Posted: 04 Apr 2021 04:05 PM PDT

I have a requirement to secure all communications between our Linux instances on Amazon EC2 - we need to treat the EC2 network as compromised and therefore want to protect the data that's being transferred within the EC2 subnet(s). The instances to secure will all be on the same subnet. I'm a Windows bod with limited Linux abilities, so am familiar with IPSec terminology and can find my way around Linux, but haven't got a clue when it comes to setting up Linux IPSec environments.

Can anyone throw me some information for setting up IPsec between all (Linux) hosts on a subnet, please? I can only find information that pertains to site-to-site or host-to-host connections, and nothing that covers all LAN communication. We're currently using Openswan for site-to-site VPNs, if that helps.

Updated with more information

This is an example config (very basic, connecting two hosts using a pre-shared key):

    conn test
        type=tunnel
        auto=start
        authby=secret
        left=10.0.2.4
        right=10.0.2.5
        pfs=yes

If I now want to secure all traffic between 4 hosts, for instance (or 8, 10, 100, etc.), is there a way to make the left and right parameters more generic, so they mean 'encrypt traffic between all hosts' rather than having to explicitly specify a left and right host?

My goal would be a generic configuration with no hardcoded host IPs (subnets would be OK), so that we could include the configuration in our EC2 image.
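
Openswan/Libreswan's "opportunistic encryption" groups are aimed at exactly this shape: one conn with right=%group plus a policies file listing the subnets to protect, instead of a conn per host pair. A rough sketch (heavily hedged - OE setups normally authenticate with RSA keys or certificates rather than a single shared PSK, and details differ between versions):

    conn private
        type=tunnel
        left=%defaultroute
        right=%group
        authby=rsasig
        auto=route

    # /etc/ipsec.d/policies/private
    10.0.2.0/24

Each instance then needs its own key material but an identical configuration file, which fits the goal of baking the config into the EC2 image.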

Thanks Mick

Randomly slow MySQL queries

Posted: 04 Apr 2021 02:02 PM PDT

I know this type of question comes up often, but I have done a lot of research and tried a lot of different settings, and still have the same issue: queries that are usually very fast can take 3s to 5s, seemingly at random.

The server is an i7-3770 (8 cores) with 32GB RAM. The CPU is about 50% idle, with no CPU spikes. No swap is used; free memory is about 10GB on average. I run MySQL 5.5.32 on CentOS 6.

9GB of RAM has been allocated to MySQL; it uses about 2GB. All data should fit in memory (600MB of data, 700MB of index).

Number of queries per second on average (no real spikes):

  • 1.5 SELECT
  • 0.2 UPDATE
  • 0.05 INSERT

Here is an example of a query that usually takes just a few ms, but sometimes more than 3s:

# Query_time: 4.337884  Lock_time: 0.050146  Rows_sent: 1  Rows_examined: 1
SELECT me.id, me.url, me.filename, me.instance_id, me.virtual_id, me.status, me.user_id, me.time_added, me.time_finished, me.priority, me.size, me.delay, me.flash_delay, me.tries, me.details, me.json_file, me.html, me.shots, me.shot_interval, me.screen_width, me.screen_height FROM Screenshots me WHERE ( me.id = '5992705' );

id is a primary key.

Although I have more SELECT than INSERT queries, I see more slow INSERTs than SELECTs.

What I have tried and tested:

  • Made sure all required indexes are there, with no redundant or unused ones
  • No CPU spike at the time, no IO spike, no swap
  • A 2nd instance of MySQL as a slave; most SELECT queries are done on the slave
  • Removed TEXT and equivalent data types
  • Tuned my.cnf

Tuning my.cnf helped a lot. I tried with the query cache enabled and disabled; not much difference.

Using a slave for SELECTs actually made things worse: I had fewer slow queries on the master, but they could go up to 12s!

Here is my current my.cnf (with the query cache in this case):

tmp_table_size                 = 32M
max_heap_table_size            = 32M
query_cache_type               = 1
query_cache_size               = 1M
thread_cache_size              = 50
open_files_limit               = 65535
table_definition_cache         = 1024
table_open_cache               = 4096

innodb_flush_method            = O_DIRECT
innodb_log_files_in_group      = 2
innodb_log_file_size           = 256M
innodb_log_buffer_size         = 8M
innodb_thread_concurrency      = 8
innodb_flush_log_at_trx_commit = 0
innodb_file_per_table          = 1
innodb_buffer_pool_size        = 9G

max_connections                = 1000
transaction-isolation          = READ-UNCOMMITTED
innodb_locks_unsafe_for_binlog = 1
innodb_io_capacity             = 1000
innodb_change_buffering        = inserts
innodb_fast_shutdown           = 0

key_buffer_size                = 2G

I'm out of ideas. I could not find any patterns (frequency, interval, etc.) that would explain these slow queries.
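
Given that even primary-key lookups stall, a hedged suggestion is to look at the storage engine and commit path rather than the query plans, capturing InnoDB state while an incident is happening:

    -- semaphore waits and log-flush stalls show up here during an incident
    SHOW ENGINE INNODB STATUS\G

    -- growing counters here point at buffer-pool or redo-log pressure
    SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_wait_free';
    SHOW GLOBAL STATUS LIKE 'Innodb_log_waits';

With innodb_flush_log_at_trx_commit = 0 the redo log is flushed in the background roughly once a second, so periodic I/O stalls on that flush would fit the "randomly slow, including INSERTs" pattern.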

Windows 2008 R2 server loses connection to Active Directory

Posted: 04 Apr 2021 06:01 PM PDT

One of our many 2008 R2 servers constantly loses its connection to the domain, meaning that users cannot log in to shares etc. on the server, and it basically becomes useless.

The event sometimes generated by the server when this happens is 3210 with the error code 0xC0000022.

This does not always happen, however.

Running NETDOM RESETPWD /Server:AD01 /UserD:domainadmin /PasswordD:domainadminpassword makes it work again until it dies again, sometimes minutes later, sometimes days later.

We have also tried the usual unjoin/rejoin domain on the server, without success.

This is happening 4-6 times a day at the moment, so it is quite a big issue.

The only services run by the server are file sharing and printing services.
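
For reference, the NETDOM RESETPWD command resets the machine account password, i.e. the secure channel to the domain; the same can be tested and repaired from PowerShell on the affected server, which makes it easier to script a probe and catch the moment the channel breaks (available on 2008 R2's PowerShell 2.0):

    # run on the affected server
    Test-ComputerSecureChannel                                       # True/False
    Test-ComputerSecureChannel -Repair -Credential (Get-Credential)

A channel that breaks this often is frequently a symptom of a duplicate computer account or machine name, or of something else resetting the same account's password, which is worth ruling out in AD.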

How to stop Sendmail sending mail from IPv6 instead of IPv4

Posted: 04 Apr 2021 05:53 PM PDT

Today I noticed that Gmail sends all messages received from my server to the Spam folder. I checked the message headers and found the following:

Authentication-Results: mx.google.com;
       spf=neutral (google.com: 2001:4ba0:cafe:........ is neither permitted nor denied by best guess record for domain of root@myserver.com) smtp.mail=root@myserver.com

So it looks like Sendmail is sending mail from the IPv6 address instead of IPv4, and there are no SPF or PTR records for the IPv6 address. How do I force Sendmail to send mail over IPv4?
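
For reference, sendmail's outgoing client socket family can be pinned to IPv4 in sendmail.mc with the CLIENT_OPTIONS macro, after which the .cf must be regenerated (a sketch; paths follow the common Linux layout):

    dnl force outgoing SMTP connections over IPv4 only
    CLIENT_OPTIONS(`Family=inet, Address=0.0.0.0')dnl

Then rebuild and restart, e.g. m4 /etc/mail/sendmail.mc > /etc/mail/sendmail.cf && service sendmail restart. The alternative is to add SPF and PTR records for the IPv6 address instead of disabling it.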

Thanks.

Powershell error when adding filepath

Posted: 04 Apr 2021 07:02 PM PDT

The script looks like this:

$searchOU='ou=Servers,dc=mydomain,dc=NET'
Get-ADComputer -filter * -SearchBase $searchOU |
    Foreach-Object {
        $server = $_.Name
        ([ADSI]"WinNT://$($_.Name)/Administrators").psbase.invoke('Members') |
            ForEach-Object {
                $user = $_.GetType().InvokeMember('Name', 'GetProperty', $null, $_, $null)
                New-Object 'PSObject' -property @{'Server'=$server; 'Admin'=$user} | Format-Table -AutoSize Server, Name | Out-File C:\Scripts\servers.txt
            }
    }

If I remove this part after New-Object...

 | Format-Table -AutoSize Server, Name | Out-File C:\Scripts\servers.txt  

...the script works perfectly. When I add the above-mentioned line, I get this error for all the servers/members it finds:

Exception calling "Invoke" with "2" argument(s): "The network path was not found. " At C:\scripts\myscript.ps1:5 char:62 + ([ADSI]"WinNT://$($_.Name)/Administrators").psbase.invoke <<<< ('Members') | + CategoryInfo : NotSpecified: (:) [], MethodInvocationException + FullyQualifiedErrorId : DotNetMethodException
