Saturday, May 14, 2022

Recent Questions - Server Fault



Force non-empty MAIL FROM for outgoing email

Posted: 14 May 2022 05:57 AM PDT

I am using postfix on a Debian server, primarily to send outgoing email for websites and notifications, and have run into an issue where outgoing email to a certain mailing list provider is rejected but email to "normal" individual email addresses goes through just fine.

I've been informed this is due to an empty MAIL FROM, since that typically indicates a bounce or a spammer, which mailing lists don't accept. However, this isn't a bounce - it's the initial outgoing message. That said, when I debug the SMTP session, MAIL FROM does indeed appear empty:

May 14 12:41:49 mail postfix/smtp[13274]: > REDACTED[REDACTED]:25: MAIL FROM:<>
May 14 12:41:49 mail postfix/smtp[13274]: smtp_stream_setup: maxtime=300 enable_deadline=0
May 14 12:41:49 mail postfix/smtp[13274]: < REDACTED[REDACTED]:25: 250 2.0.0 OK
May 14 12:41:49 mail postfix/smtp[13274]: > REDACTED[REDACTED]:25: RCPT TO:<redacted@example.com>
May 14 12:41:49 mail postfix/smtp[13274]: smtp_stream_setup: maxtime=300 enable_deadline=0
May 14 12:41:49 mail postfix/smtp[13274]: < REDACTED[REDACTED]:25: 500 Bad bounce

The mail itself is queued locally on the same server using the mail() function in PHP. It contains both From and Sender headers.

I don't know why Postfix isn't sending a sender address in MAIL FROM, but I suspect it could be due to this other reason:

almost all bounce messages use this as well as certain other circumstances, to indicate they do not wish to receive a bounce message in the event of a delivery error

https://lists.debian.org/debian-isp/2004/01/msg00259.html

However, in this case, indicating that it doesn't want a bounce is breaking outgoing email to certain destinations that require a non-empty MAIL FROM.

How can I force it to send a MAIL FROM, such as from a specific address if necessary?
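If the empty envelope sender originates from the PHP mail() call rather than from Postfix itself, one commonly suggested fix is to force a sender address at the PHP layer. This is only a sketch under that assumption; bounce@example.com is a placeholder address:

; php.ini (or a per-pool/vhost override)
; Option 1: pass -f to the sendmail binary for every mail() call
sendmail_path = "/usr/sbin/sendmail -t -i -f bounce@example.com"
; Option 2: keep the default sendmail_path and only add extra parameters
mail.force_extra_parameters = "-fbounce@example.com"

Alternatively, the fifth argument of mail() can pass -fbounce@example.com per call. Either way the envelope sender becomes non-empty, so the mailing list provider should stop treating the message as a bounce.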

IIS Send 200 Instead of 405 when POST to a Static File

Posted: 14 May 2022 04:00 AM PDT

I am trying to deploy a static network speed test application. IIS needs to behave like this Nginx config.

Everything is working fine, but I need to send 200 instead of 405 when running the upload test (a POST request to a static file).

I'm looking for the IIS equivalent of Nginx's error_page 405 =200.

<?xml version="1.0" encoding="UTF-8"?>  <configuration>      <system.webServer>               <staticContent>              <mimeMap fileExtension=".webmanifest" mimeType="application/manifest+json" />          <mimeMap fileExtension="." mimeType="application/octet-stream" />          </staticContent>    <urlCompression doStaticCompression="false" />      <httpProtocol>              <customHeaders>                  <add name="Cache-Control" value="no-store, no-cache, no-transform, must-revalidate" />              </customHeaders>          </httpProtocol>          <caching enabled="false" enableKernelCache="false" />    <security>              <requestFiltering>                  <verbs>                      <add verb="POST" allowed="true" />                      <add verb="GET" allowed="true" />                  </verbs>                  <fileExtensions>                      <add fileExtension="." allowed="true" />                  </fileExtensions>                  <alwaysAllowedUrls>                  </alwaysAllowedUrls>          <requestLimits maxAllowedContentLength="2147483647"/>              </requestFiltering>          </security>   </system.webServer>  </configuration>  

What email server/tech to use to secure email history

Posted: 14 May 2022 03:10 AM PDT

Context:

  • A company where email content is critical.
  • Today we use POP3 without the "delete mail on server" option; that is weak protection against "I deleted the mail on my phone by mistake and now it's lost forever".

How can we ensure that the history of received emails is not erased by mistake or intentionally? (e.g. mail server, provider, backup solution, external tool)

Command logging for chroot ssh users

Posted: 14 May 2022 02:40 AM PDT

I have an Ubuntu server that allows users access via SSH. When they log in they are confined to their chroot directory.

I'm looking for a way to log commands used by the users. I've tried using snoopy but it doesn't log commands for users in chroot.

Is there any possible solution for this? The only similar resources I've found have been for SFTP. I would greatly appreciate any advice, thanks.
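Since snoopy hooks execve via LD_PRELOAD, it likely never gets loaded inside the chroot. One alternative worth trying is kernel-level auditing with auditd, which records execve calls regardless of any chroot; a minimal sketch (the key name chroot_cmds is arbitrary):

# install auditd from the Ubuntu repositories
sudo apt-get install auditd
# log every execve() system call, tagged for easy searching
sudo auditctl -a always,exit -F arch=b64 -S execve -k chroot_cmds
sudo auditctl -a always,exit -F arch=b32 -S execve -k chroot_cmds
# later, review what was executed, per uid and with full argv
sudo ausearch -k chroot_cmds --interpret

To make the rules persistent they would go into /etc/audit/rules.d/ (or audit.rules, depending on the Ubuntu release).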

named.service is not running on CentOS 7

Posted: 14 May 2022 02:51 AM PDT

I was trying to create hostnames, but after 48 hours I found that the nameservers are not pointing towards my server. So I checked the named.service status and it returned the following log. Please help me:

[root@103-159-66-155 ~]# systemctl status named
● named.service - Berkeley Internet Name Domain (DNS)
   Loaded: loaded (/usr/lib/systemd/system/named.service; enabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/named.service.d
           └─cpanel.conf
   Active: failed (Result: exit-code) since Sat 2022-05-14 11:54:22 IST; 4min 36s ago
 Main PID: 6116 (code=exited, status=0/SUCCESS)

May 14 11:54:22 mercury.planetserver.cloud systemd[1]: Starting Berkeley Internet Name Domain (DNS)...
May 14 11:54:22 mercury.planetserver.cloud bash[6977]: /etc/named.conf:16: missing ';' before 'listen-on-v6'
May 14 11:54:22 mercury.planetserver.cloud systemd[1]: named.service: control process exited, code=exi...s=1
May 14 11:54:22 mercury.planetserver.cloud systemd[1]: Failed to start Berkeley Internet Name Domain (DNS).
May 14 11:54:22 mercury.planetserver.cloud systemd[1]: Unit named.service entered failed state.
May 14 11:54:22 mercury.planetserver.cloud systemd[1]: named.service failed.
Hint: Some lines were ellipsized, use -l to show in full.
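The log pinpoints the failure: line 16 of /etc/named.conf is missing a ; before listen-on-v6, which usually means the statement just above it was left unterminated. A hedged sketch of what that part of the options block normally looks like once the semicolon is restored (your actual addresses will differ):

options {
        listen-on port 53 { 127.0.0.1; };      /* the statement before line 16 must end with ';' */
        listen-on-v6 port 53 { ::1; };
        ...
};

After editing, named-checkconf /etc/named.conf should print nothing if the syntax is valid, and then systemctl restart named should bring the service back.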

How to move PAM from DR to primary machine without any downtime?

Posted: 13 May 2022 11:44 PM PDT

My environment is:

Primary site: 2 MBX and 2 CAS -- FSW \srv01\dag
DR site: 1 MBX and 1 CAS -- alternate FSW \srvdr1\dag

My question is: I want to move the PAM to the primary mailbox node (pr-mbx-01) without downtime. Is it possible?

Get-DatabaseAvailabilityGroup -Status | fl shows WitnessShare InUse: PRIMARY, and the cluster groups look like this:

Group              Node        Status
Cluster Group      DR-mbx-01   Partially Online
Available Storage  PR-MBX-02   Offline
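For reference, the PAM follows whichever node owns the cluster group named "Cluster Group", and that group can be moved live with the Failover Clustering cmdlets. This is only a sketch, assuming the primary-site node is called PR-MBX-01 and cluster communication between the sites is healthy:

# run on any DAG member in an elevated shell
Import-Module FailoverClusters
# move the core cluster group (and with it the PAM role) to the primary-site node
Move-ClusterGroup -Name "Cluster Group" -Node PR-MBX-01
# confirm which node now holds the PAM
Get-DatabaseAvailabilityGroup -Status | Format-List Name,PrimaryActiveManager

Moving the cluster group does not dismount databases, so in principle there is no mailbox downtime; activating database copies on the primary site is a separate step (Move-ActiveMailboxDatabase).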

modoboa on ubuntu 20.04: supervisord exited: policyd (exit status 1; not expected)

Posted: 13 May 2022 10:50 PM PDT

A new Modoboa 2.0 install on Ubuntu 20.04 doesn't work and gives this error in syslog:

supervisord  exited: policyd (exit status 1; not expected)  

I've tried everything and have no clue.
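To get past "no clue", it usually helps to see what the policyd process prints when it dies; a hedged sketch of first steps, assuming Modoboa's standard supervisor layout (the program name and config path may differ on your install):

# what does supervisor think the program state is?
sudo supervisorctl status
# show the program's captured stderr
sudo supervisorctl tail policyd stderr
# find the exact command supervisor runs, then run it by hand to see the traceback
grep -r policyd /etc/supervisor/conf.d/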

NGINX enforce HTTPS

Posted: 14 May 2022 04:49 AM PDT

For my web app (an Angular app) we are using NGINX as the web server. I have a task where I need to make sure all assets/images are loaded over HTTPS.

In the Browser Dev tools, I see the request is sent over HTTPS. However, the response location header is coming back as an HTTP URL (see screenshot below).

screenshot from browser dev tools

Here is the current NGINX config:

server {
    listen       80;
    server_name  localhost;
    root         /usr/share/nginx/html;

    # kill cache
    add_header Last-Modified $date_gmt;
    add_header Cache-Control 'no-store, no-cache';
    if_modified_since off;
    expires off;
    etag off;

    # Enforce HSTS
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;

    # Disable iFrames
    add_header x-frame-options "SAMEORIGIN" always;

    # detect and reject CRLF
    if ($request_uri ~* "%0A|%0D" ) {
      return 400;
    }

    # Fallback to default language if no preference defined by browser
    if ($accept_language ~ "^$") {
      set $accept_language "de";
    }

    # Redirect "/" to Angular app in browser's preferred language
    rewrite ^/$ /$accept_language permanent;

    if ($uri !~ ^/(en-US|de)) {
      return 301 /$accept_language$uri$args;
    }

    # Everything under the Angular app is always redirected to Angular in the correct language
    location ~ ^/(en-US|de) {
        try_files $uri$args $uri$args/ /$1/index.html;

        # Add security headers from separate file
        # include /etc/nginx/security-headers.conf;
    }

    location /health {
      access_log off;
      return 200;
      add_header Content-Type text/plain;
      # Enforce HSTS
      add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
    }
}
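Since this server block listens on plain port 80 (TLS is presumably terminated in front of it), nginx builds the Location header for its own redirects (the rewrite ... permanent and return 301 above) from the scheme it sees, which is http. One commonly used remedy, offered only as a sketch, is to make those redirects relative so the browser keeps the original https scheme:

server {
    listen 80;
    ...
    # emit relative Location headers ("/de/...") instead of "http://host/de/..."
    # (absolute_redirect is available in nginx 1.11.8+)
    absolute_redirect off;
    ...
}

Alternatives include issuing the redirects with explicit https:// URLs, or having the TLS-terminating proxy rewrite the Location header on the way out.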

Any help is highly appreciated. Thanks

Exchange 2019 refusing attachments within permitted size

Posted: 14 May 2022 05:40 AM PDT

I have external users (different mail host) trying to send messages to internal users on our Exchange 2019 server. The attachments are large (7MB-10MB before Base64 encoding), and the senders are receiving the following error after sending:

Remote Server returned '552 5.3.4 Message size exceeds fixed limit'  

I checked my settings as shown below, and my Exchange server should be accepting messages up to 25MB. Can someone explain how to diagnose/resolve this?

Also, does this look like an Exchange message? I have a proxy (ASSP) in front of Exchange, but the proxy does not report any errors/issues. I can't find this exact message in ASSP, so I'm pretty sure it's coming from Exchange 2019.

[PS] C:\Users\administrator.MYDOMAIN\Desktop>Get-TransportConfig | Format-List MaxReceiveSize,MaxSendSize,MaxRecipientEnvelopeLimit

MaxReceiveSize            : 25 MB (26,214,400 bytes)
MaxSendSize               : 25 MB (26,214,400 bytes)
MaxRecipientEnvelopeLimit : 500

[PS] C:\Users\administrator.MYDOMAIN\Desktop>Get-TransportRule | where {($_.MessageSizeOver -ne $null) -or ($_.AttachmentSizeOver -ne $null)} | Format-Table Name,MessageSizeOver,AttachmentSizeOver
[PS] C:\Users\administrator.MYDOMAIN\Desktop>Get-ReceiveConnector | Format-Table Name,Max*Size,MaxRecipientsPerMessage; Get-SendConnector | Format-Table Name,MaxMessageSize; Get-AdSiteLink | Format-Table Name,MaxMessageSize; Get-DeliveryAgentConnector | Format-Table Name,MaxMessageSize; Get-ForeignConnector | Format-Table Name,MaxMessageSize

Name                             MaxHeaderSize          MaxMessageSize           MaxRecipientsPerMessage
----                             -------------          --------------           -----------------------
Default EXCHANGE                 256 KB (262,144 bytes) 36 MB (37,748,736 bytes)                    5000
Client Proxy EXCHANGE            256 KB (262,144 bytes) 36 MB (37,748,736 bytes)                     200
Default Frontend EXCHANGE        256 KB (262,144 bytes) 36 MB (37,748,736 bytes)                     200
Outbound Proxy Frontend EXCHANGE 256 KB (262,144 bytes) 36 MB (37,748,736 bytes)                     200
Client Frontend EXCHANGE         256 KB (262,144 bytes) 36 MB (37,748,736 bytes)                     200

Name           MaxMessageSize
----           --------------
ASSP Smarthost 35 MB (36,700,160 bytes)

Name              MaxMessageSize
----              --------------
DEFAULTIPSITELINK Unlimited

Name                                    MaxMessageSize
----                                    --------------
Text Messaging Delivery Agent Connector Unlimited

[PS] C:\Users\administrator.MYDOMAIN\Desktop>
[PS] C:\Users\administrator.MYDOMAIN\Desktop>$mb= Get-Mailbox -ResultSize unlimited; $mb | where {$_.RecipientTypeDetails -eq 'UserMailbox'} | Format-Table Name,MaxReceiveSize,MaxSendSize,RecipientLimits

Name                MaxReceiveSize MaxSendSize RecipientLimits
----                -------------- ----------- ---------------
U1                  Unlimited      Unlimited   Unlimited
U2                  Unlimited      Unlimited   Unlimited
U3                  Unlimited      Unlimited   Unlimited

[PS] C:\Users\administrator.MYDOMAIN\Desktop>
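Two hedged observations that may help narrow this down. First, Base64 adds roughly a third, so a 10 MB attachment becomes about 13-14 MB on the wire, still well under the 25 MB organization limit shown above; the limits listed therefore do not obviously explain the rejection. Second, Exchange's message tracking log records a FAIL event with the exact size and source of a rejection, which settles whether the 552 came from Exchange or from ASSP. A sketch (recipient and time window are placeholders):

# size after Base64 wrapping, for a 10 MB attachment (illustrative arithmetic)
#   10 MB * 4/3 = ~13.3 MB, plus headers and MIME boundaries
Get-MessageTrackingLog -EventId FAIL -Start (Get-Date).AddDays(-1) -Recipients internaluser@mydomain.example |
    Format-List Timestamp,Sender,TotalBytes,RecipientStatus,SourceContext

If no FAIL event shows up for the message, the rejection most likely happened in front of Exchange.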

Server works, domain does not with VPS [closed]

Posted: 14 May 2022 02:57 AM PDT

I have to deal with a clean VPS with Ubuntu 16.04, Apache (and a domain).

The server itself works fine via its IP address. I have successfully installed PHP via SSH, so now I even have some Linux experience.

But the domain doesn't work. Trying to access it with a browser I get DNS_PROBE_FINISHED_NXDOMAIN. The same happens through Hotspot Shield VPN, but with ZenMate I get dial tcp: lookup mistod.com on 127.0.0.11:53: server misbehaving, although that is not the server's IP.

On the hosting provider's site I set its nameservers for the domain. Via SSH I created and enabled an Apache config for that domain, specifying it in the ServerName and ServerAlias directives.

Googling, I found a solution where someone wrote that SSL should be installed, so I went to Let's Encrypt and followed the instructions for certbot, but got stuck at the command sudo snap install core; sudo snap refresh core: it gives the error system does not fully support snapd: cannot mount squashfs image using... .

I also tried to install certbot via apt-get, which worked, then I installed the Apache plugin, but the final command sudo certbot --apache returns An unexpected error occurred.

Is it something on the hosting provider's side, so I need to contact them, or am I doing something wrong?
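Since the browser error is NXDOMAIN, this looks like a DNS delegation problem rather than an Apache or SSL one; a hedged sketch of how to check where the lookup breaks down (run from any machine with dig installed):

# does the registrar actually delegate the domain to the nameservers you set?
dig NS mistod.com +trace
# does a public resolver return an A record at all?
dig A mistod.com @8.8.8.8 +short

If the +trace run stops at the TLD servers or returns NXDOMAIN, the nameserver change at the registrar has not taken effect (or has not propagated yet), and certbot will keep failing too, because Let's Encrypt cannot resolve the name either.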

Apple client unable to login with LDAP backend and GSSAPI or PLAIN

Posted: 14 May 2022 05:34 AM PDT

I have an OpenLDAP server with Kerberos 5 for authentication, and on Linux/Unix/Windows environments I am able to log in without a problem. The LDAP server is configured to use GSSAPI or PLAIN; PLAIN passes the password through SASL2 to PAM, which authenticates against Kerberos. This is because some server software does not support GSSAPI directly yet. On macOS (latest Monterey) I am able to get the IDs of the users and do an ldapsearch (GSSAPI) against the LDAP server. In SSH I have enabled GSSAPI login with credential cleanup, and I have PAM auth set to yes.

It seems that the underlying Unix (BSD variant) works fine with LDAP, but the macOS overlay does something funny.

I have disabled all other authentication methods except GSSAPI and PLAIN with:

/usr/libexec/PlistBuddy -c "add ':module options:ldap:Denied SASL Methods:' string <METHOD_NAME>" /Library/Preferences/OpenDirectory/Configurations/LDAPv3/<servername>.plist  

I discovered this discussion, but it did not solve my problem.

It seems that the Apple LDAP client, for LOGIN, tries to get a Kerberos 5 ticket using the full LDAP DN instead of just the user name (log):

CLIENT_NOT_FOUND: uid=foobar,ou=People,dc=foobarbar,dc=com@FOOBARBAR.COM for krbtgt/FOOBARBAR.COM@FOOBARBAR.COM, Client not found in Kerberos database
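One low-risk way to confirm that the problem is the principal the client builds (DN@REALM instead of user@REALM), and not the KDC or the certificate chain, is to request tickets manually from the same Mac; a sketch, with foobar and FOOBARBAR.COM as the obvious placeholders:

# plain user principal: this should succeed if the KDC and realm config are fine
kinit foobar@FOOBARBAR.COM
klist
# the principal from the log (DN@REALM): expected to fail with the same CLIENT_NOT_FOUND
kinit 'uid=foobar,ou=People,dc=foobarbar,dc=com@FOOBARBAR.COM'

If the first succeeds and the second fails, the KDC side is healthy and the issue is purely how the OpenDirectory LDAP plugin maps the record to a Kerberos principal.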

Any tips would be highly appreciated!

Apache "can't locate API module structure" with php73

Posted: 14 May 2022 04:00 AM PDT

Newbie here. I was working on installing a LAMP environment on Manjaro for testing, trying to use older versions to match the production environment I have to work with (PHP 7.3, MySQL 5.6; got them from the AUR), when I had a freeze and was forced to hard reboot. Since then, I've been having a weird error with Apache; the service now refuses to start.

When I start the service with sudo systemctl start httpd, I don't get an error, but when I use sudo systemctl status httpd to check its status after that, I see this:

● httpd.service - Apache Web Server
     Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled; vendor preset: disabled)
     Active: failed (Result: exit-code) since Thu 2021-05-27 09:34:31 CEST; 7s ago
    Process: 82377 ExecStart=/usr/bin/httpd -k start -DFOREGROUND (code=exited, status=1/FAILURE)
   Main PID: 82377 (code=exited, status=1/FAILURE)

mai 27 09:34:31 gregoire-x751lx systemd[1]: Started Apache Web Server.
mai 27 09:34:31 gregoire-x751lx httpd[82377]: httpd: Syntax error on line 190 of /etc/httpd/conf/httpd.conf: Can't locate API module structure `php73_module' in file /etc/httpd/modules/libphp73.so: /etc/httpd/modules/libphp73.so: undefined symbol: php73_module
mai 27 09:34:31 gregoire-x751lx systemd[1]: httpd.service: Main process exited, code=exited, status=1/FAILURE
mai 27 09:34:31 gregoire-x751lx systemd[1]: httpd.service: Failed with result 'exit-code'.

Lines 190 and 191 of httpd.conf:

LoadModule php73_module modules/libphp73.so
AddHandler php73-script .php

libphp73.so exists in the correct location and I didn't have this error until the crash, so I tried reinstalling the php73, php73-apache and phpmyadmin packages using yay, thinking something must have changed that file somehow. That didn't change anything.

Please, what else should I try? I'm really new to Linux and very inexperienced in server management in general, so I'm not sure what other info I should be giving; I'll do my best to answer whatever is needed.
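The error says the shared object does not export a symbol called php73_module. A quick, hedged check is to list what the module actually exports; if the AUR build kept PHP 7's upstream symbol name (php7_module) rather than a versioned one (an assumption to verify, not a known fact about that package), the LoadModule line just needs to name it accordingly:

# list the dynamic symbols the module exports and look for the module structure
nm -D /etc/httpd/modules/libphp73.so | grep -i module

If that shows php7_module instead of php73_module, changing line 190 to LoadModule php7_module modules/libphp73.so should let Apache load it again.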

Azure Internal Load Balancer not Working

Posted: 14 May 2022 02:00 AM PDT

I'm trying to configure an Azure Internal Load Balancer; I have created both a Basic SKU and a Standard SKU one. I want to use it with a SQL Server VM (TCP 1433), but it is not working: when I test with tcping against the front-end IP on port 1433 it doesn't respond. I have checked the health probe; originally I created it to test TCP port 1433, but later changed it to TCP 3389 and also TCP 445, and it doesn't work either.

I have tested the load balancer from a VM on the same subnet the load balancer is on, and also from my on-premises network (via VPN).

I have checked the NSG and everything looks good. I have created an inbound rule to allow "Azure Load Balancer" access to my VNet, and also an outbound rule to allow any traffic from my VNet to "Azure Load Balancer", but it doesn't work.

Also, I disabled Windows Firewall on the backend server.

Is there a way to check the result from the Health probe? Is there anything else I can check?
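On the Standard SKU the probe result is exposed as a metric, which answers "is the probe actually passing" directly; as far as I know the Basic SKU does not expose these metrics. A sketch using the Azure CLI (resource group and load balancer names are placeholders, and DipAvailability is my understanding of how the portal's "Health Probe Status" metric is named):

# health probe status over the last period, Standard SKU only
az monitor metrics list \
  --resource "$(az network lb show -g my-rg -n my-ilb --query id -o tsv)" \
  --metric DipAvailability \
  --interval PT1M

Also worth noting: a backend VM cannot reach its own internal load balancer front end by design, so tests should come from a different machine, which your same-subnet VM test already covers.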

Regards.

rsync error: error in rsync protocol data stream (code 12) at io.c(600) [sender=3.0.6]

Posted: 14 May 2022 12:37 AM PDT

Recently I have been unable to rsync over ssh. Each time I get the same error

bash: rsync: command not found
rsync: connection unexpectedly closed (0 bytes received so far) [sender]
rsync error: error in rsync protocol data stream (code 12) at io.c(600) [sender=3.0.6]

I am running

sudo rsync -av /var/www/html/somedir/ myuser@999.999.99.9:Users/myuser/Desktop/ec2backup  

Please note the username, IP, and directories have been changed for the purposes of this post.

In the past I have run the exact same command, as verified using bash_history.

What I have tried:

  1. Ran a similar command from another server, resulting in the same error message.
  2. Tested rsync locally (local dir to local dir), which worked perfectly.

The only thing that has changed is I've recently installed Virtualbox and Vagrant. Is it possible I may have messed up authentication/ports/etc on my local machine?
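The first line of the error (bash: rsync: command not found) is produced by the remote shell, which suggests rsync is missing, or not on the PATH, on the destination host rather than anything being broken locally; a hedged sketch of how to confirm and work around that (the /usr/local/bin path is just an example):

# confirm whether the remote side can find rsync at all
ssh myuser@999.999.99.9 'command -v rsync || echo "rsync not found"'
# if it is installed but outside the default PATH, point rsync at it explicitly
sudo rsync -av --rsync-path=/usr/local/bin/rsync \
    /var/www/html/somedir/ myuser@999.999.99.9:Users/myuser/Desktop/ec2backup

Otherwise, installing rsync on the remote machine through its package manager should restore the old behaviour.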

Any help is greatly appreciated.

Ansible vmware_guest_facts: use facts for multiple VMs

Posted: 14 May 2022 03:01 AM PDT

I am writing a script where I deploy and configure VMs in a vSphere environment.

After the deployment I want to gather the IPs of the VMs for DNS registration. Facts can be gathered for 2 or more VMs at the same time. But how do I then use the gathered data to output a VM name with its IP address?

For a single VM this works to get the IP, but when used with 2 VMs the variable is undefined:

- debug:
    var: vm_guest_facts.instance.ipv4

Maybe my approach is wrong, but I am not really sure how else to do it.

- name: Gather facts from recently deployed VM's
  vmware_guest_facts:
    validate_certs: False
    hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_user }}"
    password: "{{ vcenter_pass }}"
    datacenter: "{{ datacenter }}"
    name: "{{ item.key }}"
  register: vm_guest_facts
  with_dict: "{{ vmdetails }}"

- debug:
    var: vm_guest_facts

Results (the hostname and folder say Terraform, but this example only uses Ansible):

TASK [Gather facts from standalone ESXi server having datacenter as 'ha-   datacenter']     ok: [terraform.rum.local] => (item={'value': {u'mem': 512, u'network': u'T1-   PRD', u'datastore': u'nfs-b', u'cpu': 1, u'vmfolder': u'terraform-deploy'},   'key': u'testvm4'})  ok: [terraform.rum.local] => (item={'value': {u'mem': 756, u'network': u'T2-   TEST', u'datastore': u'nfs-a', u'cpu': 2, u'vmfolder': u'terraform-deploy'},   'key': u'testvm3'})    TASK [debug]  *************************************  ok: [terraform.rum.local] => {  "vm_guest_facts": {      "changed": false,      "msg": "All items completed",      "results": [          {              "_ansible_ignore_errors": null,              "_ansible_item_result": true,              "_ansible_no_log": false,              "_ansible_parsed": true,              "changed": false,              "failed": false,              "instance": {                  "annotation": "",                  "current_snapshot": null,                  "customvalues": {},                  "guest_consolidation_needed": false,                  "guest_question": null,                  "guest_tools_status": "guestToolsRunning",                  "guest_tools_version": "10304",                  "hw_cores_per_socket": 1,                  "hw_datastores": [                      "nfs-b",                      "nfs-a"                  ],                  "hw_esxi_host": "esx-a.rum.local",                  "hw_eth0": {                      "addresstype": "assigned",                      "ipaddresses": [                          "192.168.1.12",                          "fe80::250:56ff:feb8:d51c"                      ],                      "label": "Network adapter 1",                      "macaddress": "00:50:56:b8:d5:1c",                      "macaddress_dash": "00-50-56-b8-d5-1c",                      "summary": "DVSwitch: 50 38 43 04 bb 97 81 76-81 51 a6 cd a4 39 2b 61"                  },                  "hw_files": [                      "[nfs-b] testvm4/testvm4.vmx",                      "[nfs-b] testvm4/testvm4.nvram",                      "[nfs-b] testvm4/testvm4.vmsd",                      "[nfs-b] testvm4/testvm4.vmxf",                      "[nfs-b] testvm4/testvm4.vmdk"                  ],                  "hw_folder": "/datacenter1/vm/terraform-deploy",                  "hw_guest_full_name": "CentOS 7 (64-bit)",                  "hw_guest_ha_state": null,                  "hw_guest_id": "centos7_64Guest",                  "hw_interfaces": [                      "eth0"                  ],                  "hw_is_template": false,                  "hw_memtotal_mb": 512,                  "hw_name": "testvm4",                  "hw_power_status": "poweredOn",                  "hw_processor_count": 1,                  "hw_product_uuid": "42387ae9-cac5-1faa-1e84-0859533dd2b0",                  "instance_uuid": "5038a877-e2db-1bfd-6439-78f6522a9049",                  "ipv4": "192.168.1.12",                  "ipv6": "fe80::250:56ff:feb8:d51c",                  "module_hw": true,                  "snapshots": []              },              "invocation": {                  "module_args": {                      "datacenter": "datacenter1",                      "folder": "/vm",                      "hostname": "vcenter.rum.local",                      "name": "testvm4",                      "name_match": "first",                      "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",                      "port": 443,                      "username": 
"administrator@vsphere.local",                      "uuid": null,                      "validate_certs": false                  }              },              "item": {                  "key": "testvm4",                  "value": {                      "cpu": 1,                      "datastore": "nfs-b",                      "mem": 512,                      "network": "T1-PRD",                      "vmfolder": "terraform-deploy"                  }              }          },          {              "_ansible_ignore_errors": null,              "_ansible_item_result": true,              "_ansible_no_log": false,              "_ansible_parsed": true,              "changed": false,              "failed": false,              "instance": {                  "annotation": "",                  "current_snapshot": null,                  "customvalues": {},                  "guest_consolidation_needed": false,                  "guest_question": null,                  "guest_tools_status": "guestToolsRunning",                  "guest_tools_version": "10304",                  "hw_cores_per_socket": 1,                  "hw_datastores": [                      "nfs-a"                  ],                  "hw_esxi_host": "esx-a.rum.local",                  "hw_eth0": {                      "addresstype": "assigned",                      "ipaddresses": [                          "192.168.1.16",                          "fe80::250:56ff:feb8:5e2c"                      ],                      "label": "Network adapter 1",                      "macaddress": "00:50:56:b8:5e:2c",                      "macaddress_dash": "00-50-56-b8-5e-2c",                      "summary": "DVSwitch: 50 38 43 04 bb 97 81 76-81 51 a6 cd a4 39 2b 61"                  },                  "hw_files": [                      "[nfs-a] testvm3/testvm3.vmx",                      "[nfs-a] testvm3/testvm3.nvram",                      "[nfs-a] testvm3/testvm3.vmsd",                      "[nfs-a] testvm3/testvm3.vmxf",                      "[nfs-a] testvm3/testvm3.vmdk"                  ],                  "hw_folder": "/datacenter1/vm/terraform-deploy",                  "hw_guest_full_name": "CentOS 7 (64-bit)",                  "hw_guest_ha_state": null,                  "hw_guest_id": "centos7_64Guest",                  "hw_interfaces": [                      "eth0"                  ],                  "hw_is_template": false,                  "hw_memtotal_mb": 756,                  "hw_name": "testvm3",                  "hw_power_status": "poweredOn",                  "hw_processor_count": 2,                  "hw_product_uuid": "4238b6c3-a81a-cb51-a816-b83627bfcab0",                  "instance_uuid": "5038a0b1-f75a-f8cb-a872-344afdb1bc6f",                  "ipv4": "192.168.1.16",                  "ipv6": "fe80::250:56ff:feb8:5e2c",                  "module_hw": true,                  "snapshots": []              },              "invocation": {                  "module_args": {                      "datacenter": "datacenter1",                      "folder": "/vm",                      "hostname": "vcenter.rum.local",                      "name": "testvm3",                      "name_match": "first",                      "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",                      "port": 443,                      "username": "administrator@vsphere.local",                      "uuid": null,                      "validate_certs": false                  }              },              "item": {                  "key": "testvm3",        
          "value": {                      "cpu": 2,                      "datastore": "nfs-a",                      "mem": 756,                      "network": "T2-TEST",                      "vmfolder": "terraform-deploy"                  }              }          }      ]    }  }  

What Roles and Features are installed by default on Windows Server 2012 R2?

Posted: 13 May 2022 11:04 PM PDT

Is there somewhere that I can get a list of the default Roles and Features that are installed on a Windows Server 2012 R2 server?
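For comparison purposes, a freshly installed machine can simply be asked what it has; a sketch using the Server Manager PowerShell module (run it on a clean 2012 R2 install and diff against your current box; the export path is arbitrary):

# everything Windows considers installed out of the box on this server
Get-WindowsFeature | Where-Object { $_.Installed } |
    Select-Object Name, DisplayName |
    Export-Csv C:\temp\default-features.csv -NoTypeInformation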

Add package to WinPE using DISM fails

Posted: 14 May 2022 12:00 AM PDT

I'm having trouble adding a package to a custom WinPE image.

When I try to add a package (using dism /image:c:\temp\mount /Add-Package /PackagePath:"C:\Program Files\Windows AIK\Tools\PETools\x86\WinPE_FPs\winpe_scripting.cab" in a command prompt with administrative privileges) I get this message:

An error occurred trying to open - C:\Program Files\Windows AIK\Tools\PETools\x86\WinPE_FPs\winpe_scripting.cab Error: 0x80070003
An error occurred trying to open - C:\Program Files\Windows AIK\Tools\PETools\x86\WinPE_FPs\winpe_scripting.cab Error: 0x80070003

Error: 3

An error occurred trying to open - C:\Program Files\Windows AIK\Tools\PETools\x86\WinPE_FPs\winpe_scripting.cab Error: 0x80070003

When I look in the dism.log I see this:

Incorrect parameter C:\Program Files\Windows AIK\Tools\PETools\x86\WinPE_FPs\winpe_scripting.cab - path not found -

However, I checked the path and there is no error in it. Also in the dism.log there is this error:

DISM DISM Package Manager: PID=3564 TID=4204 Failed to get the underlying CBS package. - CDISMPackageManager::OpenPackageByPath(hr:0x80070003)

I have no clue what that is.
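For what it's worth, 0x80070003 is the Win32 code for "path not found", so DISM genuinely cannot see a file under that exact name. One thing worth double-checking (an assumption, not a confirmed cause) is whether the WAIK package on disk is spelled with hyphens, e.g. winpe-scripting.cab, rather than underscores:

rem list the packages WAIK actually ships for x86 WinPE and compare the spelling
dir "C:\Program Files\Windows AIK\Tools\PETools\x86\WinPE_FPs\"
rem then re-run DISM with the file name exactly as listed
dism /image:c:\temp\mount /Add-Package /PackagePath:"C:\Program Files\Windows AIK\Tools\PETools\x86\WinPE_FPs\winpe-scripting.cab"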

Can somebody help me with adding packages to a custom WinPE WIM image?

Thanks in advance.

Jack

MySQL - How to log access from non-granted hosts

Posted: 14 May 2022 01:23 AM PDT

I have the following scenario:

MySQL server:
IP: 192.168.0.1
user: testing
pass: testing123123

MySQL client #1:
IP: 192.168.0.2

MySQL client #2:
IP: 192.168.0.3

On the MySQL server:

GRANT ALL PRIVILEGES ON *.* TO testing@'192.168.0.2' IDENTIFIED BY 'testing123123';
flush privileges;

Client #1 is granted and Client #2 is NOT granted.

Then in Client #1 shell:

mysql -h192.168.0.1 -uuser_bla_bla -pbla_bla_bla
ERROR 1045 (28000): Access denied for user 'user_bla_bla'@'192.168.0.2' (using password: YES)

In the MySQL server log:

2017-03-11 12:13:10 82588 [Warning] Access denied for user 'user_bla_bla'@'192.168.0.2' (using password: YES)  

Everything is OK: wrong username/password >> access denied >> log recorded

Now Client #2 shell:

mysql -h192.168.0.1 -uuser_bla_bla -pbla_bla_bla
ERROR 1130 (HY000): Host '192.168.0.3' is not allowed to connect to this MySQL server

In the MySQL server log: NOTHING!

My my.cnf:

[mysqld]
log_warnings = 2
log_error=/var/log/mysql_error.log

MySQL log is not logging "host is not allowed", it only logs "Access denied for user".

QUESTION: How to log MySQL "host is not allowed" cases?
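If MySQL will not record these pre-authentication rejections itself, one workaround is to log the connection attempts one layer down, at the firewall, which sees every new TCP connection to port 3306 regardless of what MySQL decides; a sketch with iptables (the log prefix is arbitrary):

# log every new inbound connection to the MySQL port, then let the normal rules accept it
iptables -I INPUT -p tcp --dport 3306 -m state --state NEW \
         -j LOG --log-prefix "mysql-conn: " --log-level info

The entries land in the kernel log (/var/log/messages or /var/log/kern.log); correlating source IPs there with the MySQL error log shows which hosts were turned away by the host check.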

Thanks!

NGINX subdomain with proxy_pass

Posted: 14 May 2022 12:00 AM PDT

I have nginx running as a reverse proxy for a nextcloud server hosted on apache on a different virtual machine. I'd like to be able to access it via cloud.example.com. With my current rules I have to put in cloud.example.com/nextcloud. I have googled, searched, and the closest I got was being able to go to cloud.example.com and it would redirect to cloud.example.com/nextcloud, but I'd like to keep the /nextcloud out of the address bar if possible. Do I need to have a /nextcloud location that does the proxy pass in addition to the /?

This is my current nginx.conf:

server {
    listen       443 ssl http2 default_server;
    server_name  _;
    ssl_certificate /etc/letsencrypt/live/cloud.domain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/cloud.domain.com/privkey.pem;
    ssl_stapling on;
    ssl_stapling_verify on;

    location /.well-known {
        alias /var/www/.well-known;
    }
    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-By $server_addr:$server_port;
        proxy_set_header Host $http_host;
        proxy_pass http://10.37.70.6:8080;
    }
}
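If Nextcloud on the backend lives under /nextcloud, one way to hide that path at the proxy is to append it to proxy_pass so nginx swaps the matched prefix (/) for /nextcloud/ before forwarding; a sketch of just the changed location, assuming the Apache vhost really serves Nextcloud at /nextcloud:

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        # the trailing path means "/" is replaced by "/nextcloud/" on the way to Apache
        proxy_pass http://10.37.70.6:8080/nextcloud/;
    }

Nextcloud also needs to generate its links without the prefix, which is what the overwritewebroot setting in its config.php is for; otherwise its own redirects will put /nextcloud straight back into the address bar.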

WSS Load Balancing with SSL Termination at layer 4

Posted: 14 May 2022 02:00 AM PDT

Should it be possible to terminate SSL for wss (secure websockets) at a layer 4 load balancer?

Seems to me that wss (and ws) in general would require TCP routing since an HTTP reverse proxy wouldn't be able to make sense of the packets; and, SSL termination would require layer 7 routing since the session is really maintained above layer 4. I feel somewhat confident about the first statement, and much less so about the second.

Bonus question. If it is possible, in general, to achieve wss routing and ssl termination in a single load balancer, can it be done specifically with HAProxy? Nginx? Other?
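On the bonus question: HAProxy can terminate TLS and still carry WebSocket traffic in HTTP mode, because the Upgrade handshake is ordinary HTTP and the subsequent frames are treated as a tunnel. A minimal sketch (certificate path, ports and backend addresses are placeholders):

frontend wss_in
    bind *:443 ssl crt /etc/haproxy/certs/example.pem
    mode http
    option http-server-close
    timeout client 1h
    default_backend ws_servers

backend ws_servers
    mode http
    balance roundrobin
    # keep idle WebSocket connections open once the Upgrade completes
    timeout tunnel 1h
    timeout server 1h
    server ws1 10.0.0.11:8080 check
    server ws2 10.0.0.12:8080 check

Nginx can do the same with proxy_pass plus the Upgrade and Connection headers, so layer 7 routing of ws/wss with TLS termination is possible in either.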

Access OpenVPN connection from local network through WAN IP?

Posted: 14 May 2022 05:01 AM PDT

I have 2 machines at home: one is a Pine64 running Debian Linux, the other a desktop PC with Windows 8.

I successfully installed an OpenVPN server on the Pine64, so I have a working setup: the OpenVPN service is accessible from the local network through the local IP address of the server; I tested the connection with my desktop PC.

The VPN is also working from the outside network through my router's WAN IP address, so port 1194 is forwarded correctly to the OpenVPN host.

I also tested the connection from an outside network with my cellphone (mobile network) and the OpenVPN Connect client; everything went fine.

I would like to simulate/test the VPN access from my desktop PC as if it were on an outside network. For example, I want to check whether I could access my other hosts on the network through SSH when I am far away from my home network.

What I don't quite understand is why I cannot access my VPN server from the local network through the router's public WAN IP.

The 2 machines have static IPs on the same network:

desktop PC: 192.168.1.11

pine 64 (openVPN server): 192.168.1.20

let the router's public WAN IP be (for the sake of the example): 5.39.182.24

So I'm trying to access the OpenVPN server at 5.39.182.24:1194, but unfortunately I am not able to. There's no firewall on the PC or any other application I am aware of that could block the connection. Trying the same approach with my cellphone from the local network fails too, so it's clearly not an issue specific to the desktop machine.

Here's the log I get from the OpenVPN client application:

Mon Sep 12 20:31:08 2016 OpenVPN 2.3.12 x86_64-w64-mingw32 [SSL (OpenSSL)] [LZO] [PKCS11] [IPv6] built on Aug 23 2016
Mon Sep 12 20:31:08 2016 Windows version 6.2 (Windows 8 or greater) 64bit
Mon Sep 12 20:31:08 2016 library versions: OpenSSL 1.0.1t  3 May 2016, LZO 2.09
Mon Sep 12 20:31:13 2016 WARNING: this configuration may cache passwords in memory -- use the auth-nocache option to prevent this
Mon Sep 12 20:31:13 2016 Control Channel Authentication: tls-auth using INLINE static key file
Mon Sep 12 20:31:13 2016 Attempting to establish TCP connection with [AF_INET]5.39.182.24:1194 [nonblock]
Mon Sep 12 20:31:23 2016 TCP: connect to [AF_INET]5.39.182.24:1194 failed, will try again in 5 seconds: Connection timed out (WSAETIMEDOUT)
Mon Sep 12 20:31:38 2016 TCP: connect to [AF_INET]5.39.182.24:1194 failed, will try again in 5 seconds: Connection timed out (WSAETIMEDOUT)

Server side settings

openVPN config

root@pine64:/etc# cat /etc/openvpn/server.conf
local 192.168.1.20 # SWAP THIS NUMBER WITH YOUR RASPBERRY PI IP ADDRESS
dev tun
#proto udp #Some people prefer to use tcp. Don't change it if you don't know.
proto tcp
port 1194
ca /etc/openvpn/easy-rsa/keys/ca.crt
cert /etc/openvpn/easy-rsa/keys/pine64.crt # SWAP WITH YOUR CRT NAME
key /etc/openvpn/easy-rsa/keys/pine64.key # SWAP WITH YOUR KEY NAME
dh /etc/openvpn/easy-rsa/keys/dh2048.pem # If you changed to 2048, change that here!
server 10.8.0.0 255.255.255.0
# server and remote endpoints
ifconfig 10.8.0.1 10.8.0.2
# Add route to Client routing table for the OpenVPN Server
push "route 10.8.0.1 255.255.255.255"
# Add route to Client routing table for the OpenVPN Subnet
push "route 10.8.0.0 255.255.255.0"
# your local subnet
push "route 192.168.1.20 255.255.255.0" # SWAP THE IP NUMBER WITH YOUR RASPBERRY PI IP ADDRESS
# Set primary domain name server address to the SOHO Router
# If your router does not do DNS, you can use Google DNS 8.8.8.8
#push "dhcp-option DNS 192.168.2.1" # This should already match your router address and not need to be changed.
push "dhcp-option DNS 8.8.8.8" # This should already match your router address and not need to be changed.
# Override the Client default gateway by using 0.0.0.0/1 and
# 128.0.0.0/1 rather than 0.0.0.0/0. This has the benefit of
# overriding but not wiping out the original default gateway.
push "redirect-gateway def1"
client-to-client
duplicate-cn
keepalive 10 120
tls-auth /etc/openvpn/easy-rsa/keys/ta.key 0
cipher AES-128-CBC
comp-lzo
user nobody
group nogroup
persist-key
persist-tun
status /var/log/openvpn-status.log 20
log /var/log/openvpn.log
verb 1

iptables

(exported the rules to a file with iptables-save)

root@pine64:/etc# cat /etc/iptables-firewall-rules.backup
# Generated by iptables-save v1.4.21 on Sun Sep 11 21:19:15 2016
*filter
:INPUT ACCEPT [16429:2363941]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [17426:8592638]
-A INPUT -i eth0 -p udp -m state --state NEW -m udp --dport 1194 -j ACCEPT
-A INPUT -i tun+ -j ACCEPT
-A FORWARD -i tun+ -j ACCEPT
-A FORWARD -i tun+ -o eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i eth0 -o tun+ -m state --state RELATED,ESTABLISHED -j ACCEPT
-A OUTPUT -o tun+ -j ACCEPT
COMMIT
# Completed on Sun Sep 11 21:19:15 2016
# Generated by iptables-save v1.4.21 on Sun Sep 11 21:19:15 2016
*nat
:PREROUTING ACCEPT [1172:103090]
:INPUT ACCEPT [157:31732]
:OUTPUT ACCEPT [205:14166]
:POSTROUTING ACCEPT [205:14166]
-A POSTROUTING -s 10.8.0.0/24 -o eth0 -j SNAT --to-source 192.168.1.20
-A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE
-A POSTROUTING -s 10.8.0.0/24 -o eth0 -j SNAT --to-source 192.168.1.20
COMMIT
# Completed on Sun Sep 11 21:19:15 2016

Output of the route command

root@pine64:/etc# route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         speedport.ip    0.0.0.0         UG    0      0        0 eth0
10.8.0.0        10.8.0.2        255.255.255.0   UG    0      0        0 tun0
10.8.0.2        *               255.255.255.255 UH    0      0        0 tun0
link-local      *               255.255.0.0     U     1000   0        0 eth0
192.168.1.0     *               255.255.255.0   U     0      0        0 eth0
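What you are running into is most likely missing NAT hairpinning (NAT loopback): a LAN client that connects to the router's WAN IP needs the router to translate the destination back inside and also rewrite the source, and many consumer routers (the Speedport included, as far as I know) simply do not do this. Purely for illustration, this is roughly what hairpin NAT looks like on a Linux gateway; the Speedport will not accept these rules, so the practical options are usually split DNS (resolve the VPN hostname to 192.168.1.20 inside the LAN) or testing from a genuinely external network:

# illustrative hairpin-NAT rules for a Linux router with WAN IP 5.39.182.24 and LAN 192.168.1.0/24
iptables -t nat -A PREROUTING  -d 5.39.182.24 -p tcp --dport 1194 \
         -j DNAT --to-destination 192.168.1.20:1194
iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -d 192.168.1.20 -p tcp --dport 1194 \
         -j MASQUERADE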

I hope someone could shed some light on this issue, I appreciate the help.

Virtual Host Forbidden after enabled SSL

Posted: 14 May 2022 12:55 AM PDT

I enabled SSL for my wamp64 server and it all works fine for http://localhost/ and https://localhost/.

But I didn't enable it just to look at localhost - I need to activate it for one of my virtual hosts:

<VirtualHost *:443>
    DocumentRoot "D:/DEV/www/app/public/"
    ServerName dev.app.com:443
    ServerAdmin admin@localhost
    ErrorLog "D:/wamp64/www/ssllogs/ssl_error.log"
    TransferLog "D:/wamp64/www/ssllogs/ssl_access.log"
    SSLEngine on
    SSLCipherSuite ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP:+eNULL
    SSLCertificateFile "D:/wamp64/ssl.crt/server.crt"
    SSLCertificateKeyFile "D:/wamp64/ssl.key/server.key"

    <FilesMatch "\.(cgi|shtml|phtml|php)$">
        SSLOptions +StdEnvVars
    </FilesMatch>

    <Directory "D:/DEV/www/app/public">
        SSLOptions +StdEnvVars
        Options Indexes FollowSymLinks Includes ExecCGI
        AllowOverride All
        Order deny,allow
        Allow from all
    </Directory>

    BrowserMatch ".*MSIE.*" \
        nokeepalive ssl-unclean-shutdown \
        downgrade-1.0 force-response-1.0
    CustomLog "D:/wamp64/www/ssllogs/ssl_request.log" \
        "%t %h %{SSL_PROTOCOL}x %{SSL_CIPHER}x \"%r\" %b"
</VirtualHost>

And of course it still has this in httpd-vhosts.conf

<VirtualHost *:80>
    ServerName dev.app.com
    DocumentRoot d:/dev/www/app/public
    <Directory "d:/dev/www/app/public/">
        Options Indexes FollowSymLinks MultiViews
        AllowOverride All
        Require local
    </Directory>
</VirtualHost>

Now, the http version works just fine, but https gives me:

Forbidden

You don't have permission to access / on this server.
Apache/2.4.17 (Win64) OpenSSL/1.0.2h PHP/5.6.16 Server at dev.app.com Port 443

Any idea what's the problem?
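One detail that stands out, though it may not be the whole story: the port 443 vhost uses Apache 2.2-style access control (Order deny,allow / Allow from all) while the working port 80 vhost uses the 2.4-style Require local, and on Apache 2.4 the old directives only take effect when mod_access_compat is loaded. A hedged sketch of the 2.4-style Directory block to try in the SSL vhost:

    <Directory "D:/DEV/www/app/public">
        SSLOptions +StdEnvVars
        Options Indexes FollowSymLinks Includes ExecCGI
        AllowOverride All
        # Apache 2.4 syntax; use "Require local" instead to mirror the port 80 vhost
        Require all granted
    </Directory>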

Server 2008 R2 NIC in "Unidentified Network" state after connectivity loss is restored

Posted: 14 May 2022 01:03 AM PDT

I am having trouble with 2 servers, both with the same symptoms. When they are reconnected to the switch after losing connectivity, they stay in an "unidentified network" state. Only after toggling IPv6 on the NIC or rebooting does it recognize the domain again and allow connections between the servers.

My temporary fix involved accessing the server via RDP, opening the NIC settings, and either enabling or disabling IPv6. It doesn't matter whether the NIC has IPv6 enabled or disabled; the problem occurs either way. I guess changing the IPv6 setting is really just resetting the NIC more than anything. Rebooting also gets the servers back up, though it takes longer than the IPv6 trick.

Right now all the servers are connected to the same switch, though we're having a problem with it where it still loses power during a generator test despite being connected to a UPS. This is a completely separate issue, but I just want to let you know WHY the servers lose network connectivity.

There are close to 10 servers and only these 2 servers seem to have the problem. They are a database and an app server that talk to each other. They were both purchased and put in place at the same time. They both have Broadcom NIC teaming enabled, although only a single cable is connected on each, leading to the switch. The same problem occurred with 4 NICs connected on each server.

While the NICs are in an unidentified state, they are unable to ping other servers. I'm guessing the unidentified state puts them in a firewall profile that doesn't allow communication with other domain servers, because the machine remains connected to the internet and can be accessed remotely.

The configured DNS server IPs are the same on each: 192.168.X.6 and 192.168.X.9, both internal AD DS servers.

Any idea why this is happening? Hopefully this is enough detail for you. Please let me know if you have any questions.
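A lighter-weight workaround than toggling IPv6, assuming the root cause is Network Location Awareness classifying the network before the domain controllers are reachable again, might be to restart just the NLA service once connectivity is back; this is a guess based on the symptoms, not a confirmed fix:

# restart Network Location Awareness so the network is re-identified
Restart-Service -Name NlaSvc -Force

If that re-identifies the network as a domain network without a reboot, it at least confirms NLA's detection timing is the culprit.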

How do I make a connection private on Windows Server 2012 R2

Posted: 14 May 2022 05:40 AM PDT

After a restart of one of our servers (a Windows Server 2012 R2), all private connections became public and vice versa (this user had the same problem). Stuff like pinging and iSCSI stopped working, and after some investigation it turned out this was the cause.

The problem is that I don't know how to make them private again. Left-clicking the network icon in the tray shows the "modern" sidebar, but it only shows a list of connections, and right-clicking them doesn't show any options.

What could be the problem, and is there a way to change these settings? I have to make one of the connections public (Internet access), and two of them private (backbone).
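On Server 2012 R2 the network category can also be set directly from PowerShell, which avoids the tray UI entirely; a sketch (the interface aliases are examples, use whatever Get-NetConnectionProfile reports, and note that domain-authenticated networks cannot be set manually this way):

# see which profile each connection currently has
Get-NetConnectionProfile
# make the two backbone connections private, leave the Internet-facing one public
Set-NetConnectionProfile -InterfaceAlias "Ethernet 2" -NetworkCategory Private
Set-NetConnectionProfile -InterfaceAlias "Ethernet 3" -NetworkCategory Private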

How to rewrite URLs with two nginx reverse proxies in front of gunicorn/Django

Posted: 14 May 2022 03:01 AM PDT

I have a Django application deployed with gunicorn on port 8000, behind a backend nginx on port 80 on the same VM. The nginx config is:

    location / {
            proxy_http_version 1.1;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $host;
            proxy_pass http://localhost:8000/;
            proxy_set_header X-Forwarded-Proto https;
            proxy_set_header REMOTE_USER $remote_user;
    }

    location /static/ {
    }

On the frontend side, there is another nginx, on port 443, translating the user-visible URLs https://myserver.com/myapplication/ into the internal http://myvm/. The nginx config is:

    location /myapplication/ {
            proxy_http_version 1.1;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $host;
            proxy_pass http://myvm/;
            proxy_redirect off;
            proxy_set_header X-Forwarded-Proto https;
            proxy_set_header REMOTE_USER $remote_user;
    }

While I can access any URL such as https://myserver.com/myapplication/ without problems, the links in the Django application are all missing the /myapplication/ path component. What's wrong with my nginx setups? Is it the frontend or the backend that is wrong?
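Strictly speaking neither nginx is wrong: both of them strip /myapplication/ before the request reaches Django, so Django never sees the prefix and generates links without it. One common fix is to tell Django about the external prefix rather than trying to rewrite its HTML in nginx; a sketch, assuming a standard settings.py:

# settings.py: Django builds every reversed URL under the public prefix
FORCE_SCRIPT_NAME = "/myapplication"
# trust the front proxy's headers when building absolute URLs
USE_X_FORWARDED_HOST = True
SECURE_PROXY_SSL_HEADER = ("HTTP_X_FORWARDED_PROTO", "https")
# static/media URLs may need the prefix too
STATIC_URL = "/myapplication/static/"

The alternative is to keep the prefix all the way through (proxy_pass http://myvm/myapplication/; on the front end and a matching location /myapplication/ on the back end), so that nothing has to be fixed up inside Django.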

KB2919355 update not offered

Posted: 14 May 2022 01:03 AM PDT

On my Windows 8.1 machines, the so-called "April Update" from KB2919355 was installed automatically by Windows Update, as expected. However, on my 2012 R2 server, the update was not automatically installed, and Windows Update says "no updates are available".

I know that I can download and apply KB2919355 from the Microsoft Downloads center, but missing this update makes me worried that this machine may be missing other updates as well. The server is updating directly from Microsoft, not from WSUS, and there is nothing else that I know of which could be blocking the update. The machine does have the prerequisite update from KB2919442.

How can I find out why this update is missing? What can I do to make sure this doesn't continue to be an issue with other updates?

(I wish I had access to another 2012 R2 server to confirm whether this is an issue specific to this machine or not, but my other Windows servers are running 2008 R2 or 2012 original, so this update doesn't apply to them.)
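One quick sanity check that separates "blocked" from "already installed by another route" is to ask the machine which of the relevant KBs it has, and whether an install was attempted and failed; a sketch (both KB numbers are the ones mentioned above):

# is the prerequisite and/or the update itself already installed?
Get-HotFix -Id KB2919442, KB2919355 -ErrorAction SilentlyContinue
# recent Windows Update client activity, in case an install was attempted and failed
Get-WinEvent -LogName "Microsoft-Windows-WindowsUpdateClient/Operational" -MaxEvents 50 |
    Select-Object TimeCreated, Id, Message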

TLS from Radius for Wifi is rejected by Win7

Posted: 14 May 2022 05:01 AM PDT

We have the following setup at our company:

  • Synology RS812+ hosting LDAP, RADIUS, DNS (Version DSM 5.0-4458 Update 2)
  • 2*Cisco Wifi APs WAP561 (Firmware 1.0.3.4)
  • Cisco Router ISA500 (Firmware 1.2.19)

What we want is basically authentication and authorization to the WiFi based on LDAP via RADIUS.

We installed a certificate on the Synology which is issued by GlobalSign for the root domain example.com and nas.example.com (we used our wildcard cert here before, which the Synology showed as self-signed; maybe the usage extensions were not there, so I bought another one).

I configured the APs (WPA2) to connect to the RADIUS (IP based) and the RADIUS to access the LDAP (same machine).

Basically everything works, except that our Win7 (and some Vista) clients are having problems completing the TLS handshake with the RADIUS server.

Unfortunately the output is not very helpful, since it only shows:

Auth 2014-04-15 10:01:49 Login incorrect (TLS Alert read:fatal:access denied): [max.mustermann@example.com/<via Auth-Type = EAP>] (from client CiscoHardware port 0 cli 00-26-82-ED-61-92)

Error 2014-04-15 10:01:49 TLS Alert read:fatal:access denied

My guess: the supplicant (the Win7 machine) is not accepting the certificate, which causes the authentication to fail. If I uncheck the option "Check Server Certificate", everything works.

The problem must almost certainly be the certificate used in the authentication, since Microsoft places strong requirements on that certificate:

http://support.microsoft.com/kb/814394/en-us

I already checked the object identifier, which is 1.3.6.1.5.5.7.3.1 and is present in the certificate.

There are two other points I might not fully understand:

  • The name in the Subject line of the server certificate matches the name that is configured on the client for the connection.
  • For wireless clients, the Subject Alternative Name (SubjectAltName) extension contains the server's fully qualified domain name (FQDN).

There is one intermediate certificate, which is present on the RADIUS server; the root cert (GlobalSign) is trusted by the OS.

About the domain name: how does a client check this, since it is connecting to an SSID and the AP points to a RADIUS server by IP?

How can I debug this a bit further? I am working on a Win7 machine, but Linux is available if needed.
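One way to see whether the certificate really meets the supplicant's expectations, independent of the AP, is to dump its subject, SAN and extended key usage and compare them against the Microsoft requirements; a hedged sketch on the Linux box (export the server certificate from the Synology first; the file name is a placeholder):

openssl x509 -in radius-server.crt -noout -text | grep -A1 -E "Subject:|Subject Alternative Name|Extended Key Usage"

On the SSID question: the client never resolves or checks the RADIUS server's IP. With PEAP it only verifies that the certificate chains to a trusted root and, if "Connect to these servers" is filled in on the wireless profile, that the certificate's subject matches one of those names; that is why the FQDN in the subject/SAN matters even though the AP reaches the RADIUS server by IP.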

Can connect to ubuntu server with PuTTY but can't via WinSCP

Posted: 14 May 2022 01:33 AM PDT

I have just updated from 8.04 to 10.04; after such a long time I am rather excited. But since the update I am unable to log in to my server via WinSCP, while a connection with PuTTY is still completely fine.

Neither are using private keys. I am just entering a username and password each time.

I do however get through to the authentication panel, where I can enter my username and password. This is where it appears to time out.

So, is there a reason why one would accept a SSH connection and not the other?
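PuTTY only needs an interactive shell, while WinSCP in its default SFTP mode also needs the server's SFTP subsystem, which can easily end up mis-pathed during a dist-upgrade; a hedged sketch of what to check on the server, plus the usual client-side test:

# is the sftp subsystem still configured, and does the path it points to exist?
grep -i "^Subsystem" /etc/ssh/sshd_config
ls -l /usr/lib/openssh/sftp-server
# after fixing the path (if needed), reload the SSH daemon
sudo service ssh reload

If the subsystem looks fine, switching WinSCP's protocol from SFTP to SCP on the login dialog is a quick way to tell whether the problem is SFTP-specific.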

MAC address allocation for channel-bonded interface

Posted: 13 May 2022 11:04 PM PDT

I've configured channel bonding (on RHEL/CentOS) with the balance-alb (mode=6) option:

BONDING_OPTS="mode=balance-alb miimon=100 updelay=200 downdelay=200"

which is working fine, and according to /proc/net/bonding/bond0 the active slave is eth1.

[root@baba ~]# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.4.0-1 (October 7, 2008)

Bonding Mode: adaptive load balancing
Primary Slave: None
Currently Active Slave: eth1
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 200
Down Delay (ms): 200

Slave Interface: eth0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:19:00:00:00:fb

Slave Interface: eth1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:06:11:11:11:3b

(I've replaced the middle bits of the MAC by 00 and 11 intentionally)

Now, according to ifconfig, the MAC address allocation for eth0 and eth1 is different from the above output - they are switched.

[root@baba ~]# ifconfig | sed -n '/^[a-z]*[0-9]/p'
bond0     Link encap:Ethernet  HWaddr 00:19:00:00:00:FB
eth0      Link encap:Ethernet  HWaddr 00:06:11:11:11:3B
eth1      Link encap:Ethernet  HWaddr 00:19:00:00:00:FB

Does anyone know why I'm seeing this, or how it works? Thanks in advance. Cheers!
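For what it's worth, the swapped addresses are expected with balance-alb: the mode does receive load balancing by giving each slave a distinct runtime MAC and steering ARP replies, so ifconfig shows the current (reassigned) addresses while /proc/net/bonding shows the permanent hardware ones. ethtool makes the comparison explicit:

# permanent (burned-in) MAC vs. the address currently assigned by the bonding driver
ethtool -P eth0      # permanent hardware address
ip link show eth0    # current runtime address
ethtool -P eth1
ip link show eth1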

Exchange 2010 send from multiple domains

Posted: 14 May 2022 04:00 AM PDT

We have a Windows Server 2008 R2 Enterprise SP1 server with multiple accepted domains configured in our Exchange 2010 console.

Configuration of Exchange 2010: in the Exchange console, under Organization Configuration > Hub Transport > Accepted Domains, we have:

domain1 > authoritative > default = true
domain2 > authoritative > default = false
domain3 > authoritative > default = false
domain4 > authoritative > default = false

We are able to RECEIVE e-mails on ALL the above domains.

Just to be clear: I can receive emails to userX@domain1.com , userX@domain2.com, userX@domain3.com and userX@domain4.com without any problems. I am able to send email from userX@domain1.com (the default domain). However, when trying to send emails from userX@domain2.com, userX@domain3.com, and userX@domain4.com, I receive the following error:

Delivery has failed to these recipients or groups:

destination_example_email You can't send a message on behalf of this user unless you have permission to do so. Please make sure you're sending on behalf of the correct sender, or request the necessary permission. If the problem continues, please contact your helpdesk.

If I change the primary email address for userX to userX@domain3.com, I am able to send as userX@domain3.com and only from that address.

The question:

How can I enable sending emails from ALL the authoritative domains at any single moment without having to manually change the default email address of the user?
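As background for anyone answering: Exchange 2010 always sends as the mailbox's primary SMTP address, and the secondary proxy addresses are receive-only aliases, so there is no per-message switch to flip. The usual workarounds are a separate recipient (mailbox, mail contact or distribution group) per domain plus Send As rights, or a separate client profile per address. A sketch of the Send As grant for that route (all names are placeholders):

# let userX send as a separate recipient that owns the domain2 address
Add-ADPermission -Identity "userX domain2" -User "MYDOMAIN\userX" -ExtendedRights "Send As"

From Outlook the user then picks the domain2 recipient in the From field, and Exchange stamps that address on the outgoing message.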
