Sunday, April 11, 2021

Recent Questions - Server Fault

I wasn't able to reconnect after I resized the server specification

Posted: 11 Apr 2021 10:33 PM PDT

I wasn't able to connect after I resized the server specification.

I hope you can help me with this issue.

Thanks.

Type: Google Compute Engine VM instance

OS: Windows Server

Would Adding NordVPN to a Windows Server Block Remote Access at Its IP?

Posted: 11 Apr 2021 10:28 PM PDT

I have a Windows Server with a Puppeteer app that scrapes various websites. Some of those sites have blocked the IP address, so I need something like a VPN so that I can change the IP address when that happens. I already have a NordVPN account and would like to use it, but I have only used it on a desktop and am not sure what impact it would have on the ability to access my remote server at its IP address.

Would installing a VPN block external access to the server?

I only want to use the VPN for outgoing connections. I already installed NordVPN, but it requires a restart, and during the install I got this error: https://support.nordvpn.com/Connectivity/Windows/1047410022/TAP-driver-error-when-connecting-to-a-VPN.htm. I fear restarting the machine might make it inaccessible via RDC at its IP address, because if NordVPN starts running the connection, it might act as a buffer between that IP and the rest of the internet.
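
For reference, one common hedge against losing remote-desktop access is to pin a host route for the admin machine through the normal gateway before enabling the VPN, so RDC traffic never enters the tunnel. A minimal sketch, assuming the admin client's public IP is 203.0.113.10 and the server's default gateway is 10.0.0.1 (both hypothetical):

    :: Run in an elevated prompt on the server before connecting the VPN.
    :: -p makes the route persist across reboots.
    route -p add 203.0.113.10 mask 255.255.255.255 10.0.0.1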

In a recursive DNS query procedure, if a local DNS server needs to query root DNS servers, how does it know/get their IP addresses?

Posted: 11 Apr 2021 06:39 PM PDT

I am taking a computer networks class and was wondering how a local DNS server knows the root DNS servers' IP addresses when querying them. I am assuming that, since these are the root servers, maybe there is a pre-provided root server address list for the local DNS, since a root server address can't be found from DNS servers lower in the hierarchy, but I may be mistaken.
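
That assumption is essentially correct: recursive resolvers ship with a static "root hints" file listing the root server addresses, and on startup they send a priming query to one of those addresses to fetch the current root NS set. A quick way to see both pieces, assuming curl and dig are available:

    # The published root hints file (BIND ships the same data as db.root)
    curl -s https://www.internic.net/domain/named.root | head

    # A priming query: ask a root server directly for the root NS set
    dig @a.root-servers.net . NS +norecurse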

Are the erratic behaviour and the following related? Entering "www.mydomain" instead of "mydomain" sends me to my other site under the same IP

Posted: 11 Apr 2021 07:21 PM PDT

The purpose of this question is to mention two things that I can't understand. I want to ask whether they are related, in order to know if I should contact the domain vendor or not.

I currently have one IP and I'm using Nginx to serve two sites: ai-friendly.com and tictactoe-neural.net.

Erratic behaviour: almost a week has passed since I set up the latter (tictactoe-neural.net) with the following configuration (GoDaddy conf., translated from Spanish). Nevertheless, sometimes entering "tictactoe-neural.net" in Chrome directs me to GoDaddy.com (the company that sold me the domain).

Another problem: entering www.tictactoe-neural.net (instead of tictactoe-neural.net) directs me to ai-friendly.com; I don't understand if that is due to an Nginx directive, a matter of one domain having been set up before the other, something related to the mentioned domain vendor, or something else.

I know I should contact the vendor about the erratic behaviour, but I don't know if the second item ("another problem") should be mentioned too.

GoDaddy conf

    Type    Name            Value                                       TTL          Actions
    A       @               67.205.140.247                              1 hour       Modify
    A       @               Parked                                      600 seconds  Modify
    CNAME   www             @                                           1 hour       Modify
    CNAME   _domainconnect  _domainconnect.gd.domaincontrol.com         1 hour       Modify
    NS      @               ns01.domaincontrol.com                      1 hour
    NS      @               ns02.domaincontrol.com                      1 hour
    SOA     @               Main server name: ns01.domaincontrol.com.   1 hour

Nginx configuration file.

    server {
        server_name ai-friendly.com;
        root /usr/share/nginx/html;
        location / { try_files $uri @app; }
        location @app {
            include uwsgi_params;
            uwsgi_pass flask:5000;
            uwsgi_read_timeout 180;
        }
    }

    server {
        server_name tictactoe-neural.net;
        root /app-tictactoe;
    }
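
Worth noting: neither server block lists www.tictactoe-neural.net as a server_name, so nginx routes that Host header to its default server, which here is the first block (ai-friendly.com); that alone would explain the "another problem" item, independently of the vendor. A hedged sketch of the fix:

    server {
        # Answer for both the bare domain and the www host
        server_name tictactoe-neural.net www.tictactoe-neural.net;
        root /app-tictactoe;
    }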

Invalid response from .well-known/acme-challenge/<token>

Posted: 11 Apr 2021 04:50 PM PDT

I'm trying to use certbot to obtain an SSL certificate for one of my subdomains. However, one of the challenges fails when trying to test .well-known/acme-challenge/<token>. The web server (nginx) returns 404. The precise error is:

    Obtaining a new certificate
    Performing the following challenges:
    http-01 challenge for foo.domain.com
    http-01 challenge for www.foo.domain.com
    Waiting for verification...
    Cleaning up challenges
    Failed authorization procedure. www.foo.domain.com (http-01): urn:ietf:params:acme:error:unauthorized :: The client lacks sufficient authorization :: Invalid response from http://www.foo.domain.com/.well-known/acme-challenge/eXpa7Ub3slbohHh0AZZA-aACo70p15KkJS05aYsN2bY [my-ip-addr]: "<html>\r\n<head><title>404 Not Found</title></head>\r\n<body bgcolor=\"white\">\r\n<center><h1>404 Not Found</h1></center>\r\n<hr><center>", foo.domain.com (http-01): urn:ietf:params:acme:error:unauthorized :: The client lacks sufficient authorization :: Invalid response from http://foo.domain.com/.well-known/acme-challenge/WxYfL5t0vLNe7jiIF2TFz1sXyQBH3RcPIVz5de9lQ8M [my-ip-addr]: "<html>\r\n<head><title>404 Not Found</title></head>\r\n<body bgcolor=\"white\">\r\n<center><h1>404 Not Found</h1></center>\r\n<hr><center>"

    IMPORTANT NOTES:
     - The following errors were reported by the server:

       Domain: www.foo.domain.com
       Type:   unauthorized
       Detail: Invalid response from
       http://www.foo.domain.com/.well-known/acme-challenge/eXpa7Ub3slbohHh0AZZA-aACo70p15KkJS05aYsN2bY
       [my-ip-addr]:
       "<html>\r\n<head><title>404 Not Found</title></head>\r\n<body
       bgcolor=\"white\">\r\n<center><h1>404 Not
       Found</h1></center>\r\n<hr><center>"

       Domain: foo.domain.com
       Type:   unauthorized
       Detail: Invalid response from
       http://foo.domain.com/.well-known/acme-challenge/WxYfL5t0vLNe7jiIF2TFz1sXyQBH3RcPIVz5de9lQ8M
       [my-ip-addr]:
       "<html>\r\n<head><title>404 Not Found</title></head>\r\n<body
       bgcolor=\"white\">\r\n<center><h1>404 Not
       Found</h1></center>\r\n<hr><center>"

     - To fix these errors, please make sure that your domain name was
       entered correctly and the DNS A/AAAA record(s) for that domain
       contain(s) the right IP address.

I have added this to my config file:

    location ^~ '/.well-known/acme-challenge' {
        allow all;
    }

But this does nothing. The fact that it is getting a 404 is what's throwing me off. If it were a problem of nginx not allowing access to the file, wouldn't it throw a 403?

nginx -t shows no errors in my config. I have ensured that my DNS info is set up correctly.

Another thing that puzzles me is that I have 3 other subdomains running on this server, and I have never had this problem with any of them.

What is happening here, and how do I allow certbot to see this file, so I can get the certificate?
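
For reference, one frequent cause of exactly this 404 is that the location block allows access but nginx resolves the challenge path against a different root than the directory certbot writes into, or the request lands in a different server block (for example, the www name not matching any server_name and falling through to the default). A sketch of a more explicit block, assuming certbot is invoked with --webroot -w /var/www/letsencrypt (path hypothetical):

    location ^~ /.well-known/acme-challenge/ {
        # Must match the --webroot-path (-w) given to certbot
        root /var/www/letsencrypt;
        default_type text/plain;
        allow all;
    }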

Pointing a custom domain to Azure Web App - Without the need of adding verification records

Posted: 11 Apr 2021 03:17 PM PDT

I have a website running on Azure Web App. This website provides a profile page to its users. The users are looking to point their custom domains to their respective profile pages. I want to minimize the manual steps to achieve this. For every custom domain, I need to add it manually to the Azure Web App and also verify ownership via a TXT record. This could be fine for a small number of custom domains, but when you have hundreds of such users it just becomes a blocker.

Is there any way I could somehow let any custom domain pointed at my website work without needing to add the domain record in the Azure portal and verify ownership?

I wonder if Azure DNS can help me achieve my goal in any way.
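
For what it's worth, even if the verification step cannot be skipped, the per-domain work can at least be scripted. A hedged sketch with the Azure CLI, using hypothetical resource names; Azure validates the customer's CNAME/TXT records at this point, so they must already exist:

    az webapp config hostname add \
      --resource-group my-rg \
      --webapp-name my-profile-app \
      --hostname profile.customer-domain.com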

Nginx proxy_store can't write on a user directory tree

Posted: 11 Apr 2021 05:37 PM PDT

CentOS 7 server used for shared hosting. No chroot. 99% WordPress installs.

Every user gets a /home/someuser skeleton including ~/web where all web-accessible files reside. All dirs below and including web are chmod-ed 0750, all files are 0640. Every user gets a php-fpm instance running as someuser:someuser. Nginx user nginx is added to the someuser group on creation. Files and dirs are owned by someuser:someuser. PHP/WordPress are happy with this, nginx doesn't have any problem serving stuff. Many years working fine.

Now I have a "dirty" (as in messy) image bank that I don't want to just copy over to the web tree. My plan is to set up an internal nginx server{} with that dir as root to serve those images on demand and I want the main nginx server{} to use proxy_store to save only the requested images on the web tree.

I can't get nginx to write under ~/web. I tried chmod-ing everything from ~/web down to 0760, to no avail. I also tried recreating the directory structure in the target dir, but it still doesn't write the files.

Should I relax permissions further up the directory chain? I don't like that idea much. Is there something I'm missing? I have this working in other setups where the nginx user owns the tree it writes to.

Ex:

    server {
        listen 8080;
        server_name blah.com;
        location / {
            root /home/someuser/messydir;
        }
    }

    server {
        root /home/someuser/web;
        # lots of lines
        location /images {
            error_page 404 = @fetch;
            expires 7d;
        }
        location @fetch {
            internal;
            proxy_set_header   Host blah.com;
            proxy_pass         http://localhost:8080;
            proxy_store        on;
            proxy_store_access user:rw group:r;
            root "/home/someuser/web/images";
        }
    }
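
Two things may be worth checking here. First, directories need the execute bit for traversal, so group rw- (0760) without x is actually more restrictive for nginx than the original r-x (0750); a write test as the worker user shows this quickly. Second, with root set inside the @fetch location, nginx appends the request URI, so stored files would land under /home/someuser/web/images/images/... A sketch of the permission check (paths as in the question):

    # Show owner/permissions of every component along the path
    namei -l /home/someuser/web/images

    # Try to create a file as the nginx user
    sudo -u nginx touch /home/someuser/web/images/.write-test && echo OK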

Mailserver hosted on 1 ISP; how to send email through 2nd server on 2nd remote ISP? (ISP 1 blocks Port 25 outbound; ISP 2 does not)

Posted: 11 Apr 2021 04:24 PM PDT

TL;DR I want to set up something like this. How can I do it?

Two ISPs.
The first to receive emails, host the main mailserver, manage mailboxes, etc.
The second ONLY to send emails using Port 25.
Two separate servers in separate locations. How can I set it up and connect the two?

I'm running a mailserver at home, but I recently switched ISPs, which killed my ability to send email (receiving works fine).
My new ISP blocks outgoing port 25. (Incoming 25 is open; all other ports in and out are open.)
I also have access to a remote business internet connection with all ports open (incl. port 25 out).

Quick diagram here (same as tldr)

I want to keep the main mailserver at my home (mail.example.com), but set up a barebones remote server just to send email (outbound.example.com, communicating over port 25), and set up a secure connection between the two servers.

Two thoughts pop into my mind:

  1. Set up a VPN and route all outbound port 25 connections from the main server to the remote server
  2. Set up a minimal SMTP server on the remote machine that communicates back home and doesn't store anything itself

Are these possible? How would I do it? Is there any other way to achieve sending emails?

My current setup is Ubuntu Server 20, running Mailcow in an LXC container, but I am more than willing to change my setup (anything without Docker is appreciated). My first idea is just networking, so it would be software-independent, while my second would require the SMTP server to communicate with the mothership somehow.

Finally, I want the absolute bare minimum strain on the remote business ISP connection. It is slower and farther away than my home internet, and I have limited access to it (except through SSH). So I don't want to move my entire mailserver there.
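
For reference, the usual shape of idea 2 is a Postfix "null client" relay on the remote box: it accepts mail only from the home server (ideally across a VPN such as WireGuard), sends it out on port 25, and stores nothing. A minimal sketch of the remote server's main.cf, assuming a hypothetical WireGuard tunnel with the home server at 10.8.0.1 and the relay at 10.8.0.2; the home-side Mailcow would then point at this box as its relayhost/smarthost:

    # /etc/postfix/main.cf on outbound.example.com (relay-only null client)
    myhostname = outbound.example.com
    # Accept relay connections only from localhost and the home server's VPN IP
    mynetworks = 127.0.0.0/8, 10.8.0.1/32
    inet_interfaces = 127.0.0.1, 10.8.0.2
    # No local mailboxes, no local delivery, nothing stored here
    mydestination =
    local_recipient_maps =
    local_transport = error:local delivery disabled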

Any help would be appreciated!

nginx under high traffic: network goes down when log writes to disk?

Posted: 11 Apr 2021 03:58 PM PDT

A VPS with 2 vCPUs, running Ubuntu 20.04 and nginx.

Nothing was changed regarding logging: not in nginx, rsyslogd, or journald.

I launch ab (Apache Benchmark) from a nearby VPS, like this:

ab -k -c 300 -n 3000000 'https://example.com'  

Then, in the provider's graphs, I can see how the network goes down (throughput and packets per second) while the disk writes increase. This happens at 30-second intervals.

The disk writes increase in throughput, but the disk IOPS stay low, 1 or 2 IOPS during the whole benchmark. There is nothing else running on the system but my SSH session on the internal interface, with a tail -f of the nginx logs.

So I suspect maybe it's the way nginx writes the log to disk, or maybe the default sysctl settings and the way the kernel syncs the changes to disk(?)

I don't see too many sysctl settings at 30 seconds:

    # sysctl -a | grep '30$'
    kernel.acct = 4 2       30
    net.core.xfrm_acq_expires = 30
    net.ipv4.ipfrag_time = 30
    net.ipv4.neigh.default.gc_interval = 30
    net.ipv6.neigh.default.gc_interval = 30
    net.ipv6.route.gc_interval = 30
    vm.max_map_count = 65530

But there is this at 3000 centisecs:

    # sysctl -a | grep '3000$'
    vm.dirty_expire_centisecs = 3000

Could it be that one?

dirty_expire_centisecs This tunable is used to define when dirty data is old enough to be eligible for writeout by the kernel flusher threads. It is expressed in 100'ths of a second. Data which has been dirty in-memory for longer than this interval will be written out next time a flusher thread wakes up.

What worries me is the traffic going from 7K pps to zero every 30 s and coming back when the disk write is done.

What can be done to avoid this behavior?

Here is an image of the graphs that shows the issue as described: VPS performance graphs

Edit: sysctl findings

Update

It's not related to the nginx log.

Following @berndbausch's indications, I looked at the client side, and the graphs show the same network drops there.

Repeating the benchmark with:

access_log /var/log/nginx/access.log combined buffer=64K flush=5s;  

And:

sysctl -w vm.dirty_expire_centisecs=500  

The disk IOPS increase from 1 to about 10, and the disk throughput graph shows peaks every 5 seconds, but the network graphs still show the same "down to 0" at 30-second intervals, both on the server and on the client.

More interesting: repeating the benchmark with:

access_log off;  

The disk graphs stay at 0, but the network graphs do the same.

In this image, both benchmarks can be seen as described: the left side with flush every 5 s, and the right side with no access log.

Update 2

Performing an iperf dual test on port 443... the server graph is flat at 1 Gbps, but the iperf client shows the same behavior: the network-out graphs go down to 0 every 30 seconds.

I will try with a different client, or tune the client OS a little bit (limits and sysctl); let's see.

Update 3

This looks like a monitoring bug in the control panel.

I repeated the benchmarks from another VPS as the client, and from a dedicated server (bare metal); always the same graphs...

But if I launch bmon on both sides during the tests... the traffic looks flat.

The same on the receiver as on the sender: 10 Gbps between the two VPSs and 1 Gbps from the dedicated server to the VPS. Always flat at 1-second resolution.

So... mystery solved.

Update an Exchange user's alias without changing other fields

Posted: 11 Apr 2021 10:55 PM PDT

If I update my Exchange user's alias under the General tab in the Exchange admin center, I find that it automatically updates the email address under the General tab and the SMTP address under the Email Address tab as well. It also updates the proxyAddresses attribute in AD, since that is linked to the Exchange email address (SMTP address). May I know whether this is the default behavior of the Exchange server?

As far as I know, the Exchange server uses the alias to search for the correct email address, so if you update the alias field in Exchange for that particular user, the email address (SMTP) in Exchange and the proxyAddresses attribute in AD are updated automatically.

So may I know if there is any method to update only the alias without changing any other value in Exchange and AD? I am not sure whether there is any method for this, because I saw that the alias field in the Exchange admin center is a mandatory field.
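
For reference, what is described is the expected behaviour when an email address policy applies to the mailbox: changing the alias makes the policy regenerate the SMTP addresses, which in turn rewrites proxyAddresses in AD. If the policy is detached from that mailbox, the alias can usually be changed on its own. A hedged Exchange Management Shell sketch (identity hypothetical):

    # Stop the email address policy from rewriting this mailbox's addresses
    Set-Mailbox -Identity "jdoe" -EmailAddressPolicyEnabled $false

    # Now the alias can change without the SMTP addresses following it
    Set-Mailbox -Identity "jdoe" -Alias "newalias"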

Setting up a stand-alone web access point

Posted: 11 Apr 2021 03:34 PM PDT

I have an OpenWrt router that is configured to offer no internet, just a local web portal. I have configured the firewall to forward all HTTP requests to the router's web server, and I have configured dnsmasq to return the router's IP for all DNS requests, but the captive portal is not working on Android. I even tried setting up dnsmasq to return no IP for connectivitycheck.gstatic.com, but the captive portal is still not working.
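
One detail that may matter: Android decides a captive portal is present only when its probe of http://connectivitycheck.gstatic.com/generate_204 returns something other than HTTP 204; returning no IP (NXDOMAIN) makes Android conclude there is no connectivity at all, so the sign-in prompt never appears. With dnsmasq already resolving every name to the router, one hedged approach is to answer the probe with a redirect, for example via a small CGI under uhttpd (portal URL hypothetical):

    #!/bin/sh
    # /www/cgi-bin/portal-redirect (chmod +x): answer the probe with a 302
    # so Android raises the "sign in to network" notification.
    printf 'Status: 302 Found\r\n'
    printf 'Location: http://192.168.1.1/portal.html\r\n'
    printf '\r\n'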

kex_exchange_identification: Connection closed by remote host

Posted: 11 Apr 2021 03:38 PM PDT

I am trying to connect to web servers running on CentOS 7 via a jump server. Earlier this connection worked fine without any problems, but I am not sure what went wrong now.

Following is the status:

    $ ssh -vvv abc@JUMP_SERVER_IP -J 10.10.0.5 -i .ssh/id_rsa_iit
    OpenSSH_8.2p1 Ubuntu-4ubuntu0.2, OpenSSL 1.1.1f  31 Mar 2020
    debug1: Reading configuration data /etc/ssh/ssh_config
    debug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files
    debug1: /etc/ssh/ssh_config line 21: Applying options for *
    debug2: resolve_canonicalize: hostname JUMP_SERVER_IP is address
    debug1: Setting implicit ProxyCommand from ProxyJump: ssh -vvv -W '[%h]:%p' 10.10.0.5
    debug1: Executing proxy command: exec ssh -vvv -W '[JUMP_SERVER_IP]:22' 10.10.0.5
    debug1: identity file .ssh/id_rsa_iit type 0
    debug1: identity file .ssh/id_rsa_iit-cert type -1
    debug1: Local version string SSH-2.0-OpenSSH_8.2p1 Ubuntu-4ubuntu0.2
    OpenSSH_8.2p1 Ubuntu-4ubuntu0.2, OpenSSL 1.1.1f  31 Mar 2020
    debug1: Reading configuration data /etc/ssh/ssh_config
    debug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files
    debug1: /etc/ssh/ssh_config line 21: Applying options for *
    debug2: resolve_canonicalize: hostname 10.10.0.5 is address
    debug2: ssh_connect_direct
    debug1: Connecting to 10.10.0.5 [10.10.0.5] port 22.
    debug1: connect to address 10.10.0.5 port 22: Connection timed out
    ssh: connect to host 10.10.0.5 port 22: Connection timed out
    kex_exchange_identification: Connection closed by remote host
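
Note the argument order in the command shown: with ssh, -J names the intermediate host, so here 10.10.0.5 is being dialled first as the jump host and JUMP_SERVER_IP is the final target, and it is the direct connection to 10.10.0.5 that times out. If 10.10.0.5 is actually the internal web server, the two hosts may simply be swapped; a sketch of the intended order (names as in the question):

    # ssh -J <jump host> <destination>: the jump host is contacted first,
    # then the destination is reached through it.
    ssh -i .ssh/id_rsa_iit -J abc@JUMP_SERVER_IP abc@10.10.0.5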

ESXi 6.7 web UI, what is CPU "Package 0" in the monitor page?

Posted: 11 Apr 2021 08:53 PM PDT

I have an ESXi 6.7 lab with an i9-10900. On the Web UI Host - Monitor - Performance tab page, what is CPU "Package 0"? The value isn't the same as the host CPU usage %. I Googled, but I can't find what it is. It seems to be the reading of the socket 0 CPU usage (I don't have a dual-CPU server to check). Then, what is the host CPU reading for? The load of the hypervisor? Any link to an official doc?

Bind: query (cache) './ANY/IN' denied - is it a DDoS attack?

Posted: 11 Apr 2021 03:49 PM PDT

My syslog is getting flooded with messages like

    Jan 12 11:09:25 xxx named[902]: client 74.74.75.74#47561 (.): query (cache) './ANY/IN' denied
    Jan 12 11:09:25 xxx named[902]: client 74.74.75.74#47561 (.): query (cache) './ANY/IN' denied
    Jan 12 11:09:25 xxx named[902]: client 74.74.75.74#47561 (.): query (cache) './ANY/IN' denied
    Jan 12 11:09:25 xxx named[902]: client 74.74.75.74#47561 (.): query (cache) './ANY/IN' denied
    Jan 12 11:09:25 xxx named[902]: client 74.74.75.74#47561 (.): query (cache) './ANY/IN' denied
    Jan 12 11:11:19 xxx named[902]: client 68.12.225.198#58807 (.): query (cache) './ANY/IN' denied
    Jan 12 11:11:19 xxx named[902]: client 68.12.225.198#58807 (.): query (cache) './ANY/IN' denied
    Jan 12 11:11:19 xxx named[902]: client 68.12.225.198#58807 (.): query (cache) './ANY/IN' denied
    Jan 12 11:11:19 xxx named[902]: client 68.12.225.198#58807 (.): query (cache) './ANY/IN' denied
    Jan 12 11:11:19 xxx named[902]: client 68.12.225.198#58807 (.): query (cache) './ANY/IN' denied
    Jan 12 11:11:26 xxx named[902]: client 68.12.225.198#9414 (.): query (cache) './ANY/IN' denied
    Jan 12 11:11:26 xxx named[902]: client 68.12.225.198#9414 (.): query (cache) './ANY/IN' denied
    Jan 12 11:11:26 xxx named[902]: client 68.12.225.198#9414 (.): query (cache) './ANY/IN' denied
    Jan 12 11:11:26 xxx named[902]: client 68.12.225.198#9414 (.): query (cache) './ANY/IN' denied
    Jan 12 11:11:26 xxx named[902]: client 68.12.225.198#9414 (.): query (cache) './ANY/IN' denied

and I don't know if this is a DDoS attack or just strange behaviour of BIND.

So I set up a simple fail2ban jail that blocks IPs that produce more than 20 such errors in 24 h. After the weekend I checked and was astonished: more than 1000 IPs had been blocked, including famous ones like 1.1.1.1. So this cannot be right.

My server is a Debian 9 machine managed via Plesk Obsidian. I have no special configuration on bind9/named (as far as I know). It is the primary nameserver for all my domains.

So the question is: what can I do to protect my server against such a flood of DNS queries, or should I just ignore them?
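
For reference, queries for './ANY/IN' from many unrelated addresses are the classic signature of a DNS reflection/amplification attempt: the source IPs are spoofed victims, which is why fail2ban ends up banning innocent resolvers such as 1.1.1.1. On an authoritative-only server the usual mitigation is response rate limiting rather than IP bans; a hedged named.conf sketch (values illustrative, a BIND build with RRL support assumed):

    options {
        // Authoritative-only: refuse recursion for strangers
        recursion no;

        // Response Rate Limiting: cap identical responses (including
        // denials) per second per client netblock.
        rate-limit {
            responses-per-second 10;
            errors-per-second 5;
        };
    };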

Windows 2012 Server, Update error 80072EFE

Posted: 11 Apr 2021 06:08 PM PDT

A new installation of Windows 2012 as a guest inside Hyper-V 2019. When trying to update, I get the 80072EFE error.

This error indicates a network timeout, but the internet is working OK. I've ruled out antivirus, firewall, router/gateway filtering, incorrectly configured time/date, etc.

Looking at the network traffic generated during an unsuccessful update attempt, I see a successful TCP handshake between the server and Microsoft's update server 40.70.224.149. However, after the handshake the Windows 2012 server sends a Client Hello packet and Microsoft's server answers with an RST and ends the connection.

This happens a few times and then I get an error 80072EFE.

I have two more Windows 2012 servers (for a lab) installed from scratch, and the same thing happens on them as well.

Any ideas?
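
A ClientHello answered by an immediate RST is consistent with the endpoint no longer accepting the TLS versions that Server 2012 offers by default. If that is the cause here, enabling TLS 1.2 both in Schannel and for WinHTTP (which the update agent uses) may help; a hedged .reg sketch, to be applied at your own risk and followed by a reboot (see Microsoft KB3140245 for the WinHTTP value):

    Windows Registry Editor Version 5.00

    ; Enable TLS 1.2 as a client in Schannel
    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client]
    "DisabledByDefault"=dword:00000000
    "Enabled"=dword:00000001

    ; Make WinHTTP use TLS 1.2 (0x800)
    [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings\WinHttp]
    "DefaultSecureProtocols"=dword:00000800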

VLAN manager does not automatically disable MACs

Posted: 11 Apr 2021 08:07 PM PDT

I want to implement a client infrastructure where the devices connect to the network in different VLANs.

I installed a FreeRADIUS server connected to our Active Directory. I have enabled the switches for dynamic VLANs and assigned all the VLANs to LDAP groups, which in turn enable the authentication of MAC addresses through RADIUS policies.

Everything works correctly when manually creating MAC-address users in Active Directory that represent our network cards.

Since the clients that have to stay on the various VLANs are dynamic, based on the title attribute of the user connected to the device, I installed this server application (vmam), which should automatically manage the various MAC addresses given the correct configuration.

Wow, it works correctly, as I hoped, but... as far as I understand, it should also manage the disabling of the various MAC addresses, and with my current configuration that does not work.

This is my configuration:

    LDAP:
      add_group_type:
      - user
      bind_pwd: password
      bind_user: test\admin
      computer_base_dn: OU=Computers,OU=My,DC=test,DC=com
      domain: test.com
      mac_user_base_dn: OU=MAC,DC=test,DC=com
      match: like
      max_computer_sync: 0
      mac_user_ttl: 30d # TTL after which a mac-address should be disabled
      other_group:
      - ALL_MAC
      servers:
      - dc1
      - dc2
      ssl: false
      time_computer_sync: 1m
      tls: true
      user_base_dn: OU=My,DC=test,DC=com
      verify_attrib:
      - title
      write_attrib:
    VMAM:
      filter_exclude:
      - TAP
      - VirtualBox
      - disconnect
      log: /usr/log/vmam.log
      remove_process: false
      automatic_process_wait: 3
      mac_format: none
      soft_deletion: true   # This should disable mac-addresses
      user_match_id:
        Manager: 200
        Developer: 210
        Office: 220
        Customer: 230
      vlan_group_id:
        200: VLAN_Manager
        210: VLAN_Developer
        220: VLAN_Office
        230: VLAN_Customer
      winrm_pwd: password
      winrm_user: test\admin

Does anyone know why it doesn't work? Have you ever used this software? Everything works great; it seems to me a real VLAN manager, but I don't know how to activate the disabling.

As a workaround, it can be used as a Python module and I could write a script, but I don't know how to use Python.

resizing partition using multipathd

Posted: 11 Apr 2021 05:01 PM PDT

Red Hat 6.3 with a multipath XFS partition.

I have already increased the LUN and need to reflect the increase in the filesystem. Using xfs_growfs will not work yet unless I increase the partition size. Since it's multipath, I found there is a command for that, multipathd; the command to be used is

multipathd resize map multipath_device

For those who have already done it: is this command destructive or not? I'd like to run it on an online filesystem (a backup has been done).
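
To the best of my understanding the command itself is not destructive: it only re-reads the already-grown path devices and updates the device-mapper table, without touching on-disk data. The usual online sequence looks roughly like this (device and map names hypothetical); note that if a partition sits between the multipath device and XFS, that partition must also be grown, which is the genuinely delicate step:

    # 1. Have the kernel re-read the new LUN size on every path device
    echo 1 > /sys/block/sdb/device/rescan
    echo 1 > /sys/block/sdc/device/rescan

    # 2. Update the multipath map to the new size
    multipathd -k"resize map mpathb"

    # 3. Grow XFS online (it must be mounted)
    xfs_growfs /mountpoint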

Receiving mail from Gmail delayed - PTR issue?

Posted: 11 Apr 2021 10:05 PM PDT

One of our partners receives our mails with noticeable delays. The same mail sent to two addresses under their domain is sometimes delivered at their server hours apart (checked in the actual server logs, not just the user mailboxes). I suspect a mismatch in the reverse DNS setup is causing this issue, but I'm not sure that would result in these errors.

We are using G Suite (Google Apps for Business), they are using Exchange on their own premises (not sure what version). They have two internet connections at their office, and the Exchange server is reachable on both IP addresses (so from the outside I can telnet 1.1.1.1 on port 25 and 2.2.2.2 on port 25 and get the same responses).

Let's say the domain is example.com. The MX record points to mail.example.com, and mail.example.com resolves to 1.1.1.1 and 2.2.2.2. 1.1.1.1 is under their control, the PTR record for 1.1.1.1 resolves to mail.example.com. The 2.2.2.2 address is not under their control, the PTR record points to 2-2-2-2.static.their-isp.com. The SMTP mail server has a banner of mail.example.com.

I am mentioning these PTR records because tools like MXToolBox mention this SMTP header mismatch, but after reading similar questions here it's not clear to me whether that only applies to sending mail from that domain (and spam filters on the receiving side), or also receiving mail there.

In the past their DNS setup was different: they had two MX records, pointing to mail.example.com and mail2.example.com, with mail.example.com resolving to 1.1.1.1 and mail2.example.com resolving to 2.2.2.2. The SMTP banner was still just mail.example.com. Many mails were delayed but still received after a while. For one mail I got the following warning from Google:

Technical details of temporary failure: The recipient server did not accept our requests to connect. Learn more at https://support.google.com/mail/answer/7720 [mail.example.com. 1.1.1.1: timed out] [mail2.example.com. 2.2.2.2: unable to read banner]

I interpreted this as meaning that the connection via 1.1.1.1 was down and the connection via 2.2.2.2 was up, but Gmail refused to deliver the message because the SMTP banner (mail.example.com) did not match either the PTR record of 2.2.2.2 (2-2-2-2.static.their-isp.com) or the DNS record used to find 2.2.2.2 (mail2.example.com). After I mentioned this to them, they changed to the setup mentioned above.

But today I compared this to the MX setup of G Suite, and their setup is similar:

  • MX record: ASPMX.L.GOOGLE.COM
  • which resolves to 209.85.202.27
  • which reverses to dg-in-f27.1e100.net
  • SMTP banner is mx.google.com

MXToolBox also mentions this SMTP Banner Check as a possible issue, but I assume Google knows how to configure their servers :-)

So, what I want to know: can any of the settings above cause the issues we see, with Google only being able to deliver some messages to their servers after a big delay? Or are there other obvious places where we should be looking?
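
For completeness, the forward/reverse/banner triple can be verified from outside with standard tools; mismatches mostly hurt sending reputation, but it is a sensible first check here too (addresses as in the question):

    dig +short MX example.com        # expect mail.example.com
    dig +short A mail.example.com    # expect 1.1.1.1 and 2.2.2.2
    dig +short -x 2.2.2.2            # PTR: here the ISP name, not mail.example.com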

nginx proxy cache mp4 streaming

Posted: 11 Apr 2021 09:51 PM PDT

Sorry for my question; the schema is like this: there is an upstream, an IIS server where the video files are located. My nginx is a caching proxy server; I need to cache an mp4 file when a client starts playing it in the browser and stream it to the client. If the index of the mp4 file is located at the beginning of the file, then it's OK; it works well. But if the index is located at the end of the mp4 file, then I have problems. Watching the cache, I see that nginx caches the file from the upstream to the end, then deletes it, and for the next section of the file it caches the whole thing again, sends the section, and deletes the cache... I do not understand why :( It also sends many error headers, such as an incorrect length, in which case the player stops :(

(RAM Cache definitions)

1 level server defs

    server {
        listen        front.network;
        server_name   .mybox.com;

        add_header 'Access-Control-Allow-Origin' '*';
        add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
        add_header 'Access-Control-Allow-Headers' 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type';

        max_ranges 1024;

        proxy_cache ssd;
        proxy_cache_valid 200 600s;
        proxy_cache_lock on;
        proxy_read_timeout 10m;
        proxy_send_timeout 10m;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
        add_header Accept-Ranges bytes;
        proxy_cache_min_uses 1;
        proxy_force_ranges on;

        proxy_cache_key   $uri$is_args$args;

        # Immediately forward requests to the origin if we are filling the cache
        proxy_cache_lock_timeout 0s;

        # Set the 'age' to a value larger than the expected fill time
        proxy_cache_lock_age 200s;
        proxy_cache_valid 200 206 301 302 48h;

        proxy_cache_use_stale updating;

        location /5 {
            proxy_set_header Host $redirect_host;
            proxy_pass http://$redirect_upstream;
        }
    }
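
For reference, the behaviour described (re-fetching the whole file from the upstream for every range the player asks for) is what a plain proxy_cache does with range requests on uncached content. The stock answer for large media files is the slice module (ngx_http_slice_module, nginx 1.9.8+), which fetches and caches the file in fixed-size chunks keyed by range; a hedged sketch adapted to the location above (chunk size illustrative):

    location /5 {
        slice              1m;                      # fetch/cache in 1 MiB chunks
        proxy_cache        ssd;
        proxy_cache_key    $uri$is_args$args$slice_range;
        proxy_set_header   Range $slice_range;      # ask the upstream per chunk
        proxy_http_version 1.1;
        proxy_cache_valid  200 206 48h;
        proxy_set_header   Host $redirect_host;
        proxy_pass         http://$redirect_upstream;
    }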

Issue with GPOs for home folder creation and drive maps

Posted: 11 Apr 2021 10:05 PM PDT

I am following a guide I've seen recommended on here for setting up home folders and drive maps for users, and I am running into an issue despite having set it up exactly as illustrated here:

http://alexcomputerbubble.com/using-group-policy-preferences-gpp-to-map-user-home-drive/

I checked the Event Viewer during the initial logon, and even though the folder gets created on the server, I see error 4098 (the Group Policy failed with error code 0x80070037: the specified network resource or device is no longer available).

After the 3rd logon the drive shows up correctly.

Looking at the comments on the blog, some users have the same issue while others do not. I can't figure out why.

I would prefer to have the home folder created via Group Policy, as opposed to the AD profile tab, so that it's easier for the help desk to set up a new user.

OpenVPN install: can't access the Client UI page

Posted: 11 Apr 2021 09:07 PM PDT

After installing OpenVPN successfully, I tested by accessing the Client UI, but it said ERR_CONNECTION_TIMED_OUT. Is there any way to fix it? Information: I'm running CentOS 7 on an Amazon EC2 instance. I turned off SELinux and checked whether openvpn is running or not.

    [root@ip-10-0-7-48 tmp]# netstat -ntlp | grep 'openvpn'
    tcp        0      0 10.0.7.48:443    0.0.0.0:*    LISTEN    2023/openvpn-openss
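
Since openvpn is listening on 443 locally, a timeout from outside most often means the EC2 security group (or a network ACL) does not allow the port. A hedged check and fix with the AWS CLI (group ID hypothetical; OpenVPN Access Server also commonly uses 943/TCP and 1194/UDP):

    # Inspect the instance's security group rules
    aws ec2 describe-security-groups --group-ids sg-0123456789abcdef0

    # Open 443/TCP to the world if it is missing
    aws ec2 authorize-security-group-ingress \
      --group-id sg-0123456789abcdef0 \
      --protocol tcp --port 443 --cidr 0.0.0.0/0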

site to site vpn between sonicwall and pfsense

Posted: 11 Apr 2021 05:01 PM PDT

The problem I am facing is the establishment of a site-to-site VPN between pfSense (version 2.0.1) and a SonicWall Pro2040 Enhanced (Firmware Version: SonicOS Enhanced 4.2.1.4-7e). All of the configuration is done properly; still I get an error in SonicWall, shown in the attached screenshot.

Phase 1 and 2 complete properly, but there is a problem with "Payload processing". I found that it could be a shared-key mismatch, but I double-checked: there is no mismatch of the shared key on either firewall. SonicWall also shows that the tunnel is active (see the attached screenshot).

The log from pfSense is in the attached screenshot.

In pfSense the tunnel shows as inactive.

I am not an expert in firewalls, so I would be grateful to receive proper guidance in this regard.

What alternatives exist to using TFTP for client setup?

Posted: 11 Apr 2021 09:07 PM PDT

I'm looking for a way to set up clients in a network and have used TFTP so far. Messing around with the server, I was able to do a path traversal with something like GET asdf/../../../../windows/win.ini. For this and other security reasons I'd like to switch to something more secure. As far as I know, setting up clients with PXE over the network always uses DHCP and TFTP to download the images. I've seen the possibility of running the TFTP service in a chrooted environment, or of filtering incoming traffic on port 69, to make it more secure. I'm not too fond of this, because I think there should be a better option than deactivating the service or filtering traffic. Also, it would be nice to get away from TFTP completely. Are there any other alternatives under Windows?
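
For reference, PXE's first stage is DHCP+TFTP by specification, but the bulk transfer can be moved off TFTP by chainloading iPXE (which then fetches everything over HTTP), and UEFI HTTP Boot can drop TFTP entirely on modern firmware. A hedged sketch of the iPXE chainload, shown in dnsmasq syntax for brevity (file names and URL hypothetical):

    # Tag requests that already come from iPXE (it sends DHCP option 175)
    dhcp-match=set:ipxe,175

    # Plain PXE firmware gets the tiny iPXE binary once, over TFTP...
    dhcp-boot=tag:!ipxe,undionly.kpxe

    # ...after which iPXE fetches the real boot script over HTTP
    dhcp-boot=tag:ipxe,http://boot.example.local/boot.ipxe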

Transparent redirect or proxy in Apache, preserving incoming request

Posted: 11 Apr 2021 06:08 PM PDT

When a user first hits our server, we want to capture some information about the incoming request: GET parameters, Referer header, &c. The general idea is that when we get an incoming request that matches some RewriteConds (doesn't have a cookie, doesn't have a particular GET param in case they don't accept cookies, &c.), we use a RewriteRule with [P] to transparently proxy the request to a servlet (actually a Spring controller, if that matters) that will analyse the incoming request, then send a 302, with a new cookie set, to redirect the user to the originally requested URL. That is,

  • User requests /foo.html
  • mod_rewrite detects that this is the user's first request (no cookie, no GET flag param)
  • A RewriteRule with [P] proxies to /my/spring/controller
  • Servlet analyses the request and responds with a 302 to Location: /foo.html.

The first three steps are simple enough. The problem is that in step four, the servlet has no idea that /foo.html was ever requested, which means (a) it can't record the fact that such was the case (a business requirement) and (b) it doesn't know where to redirect. We can see which server was requested in X-Forwarded-Host et al., but looking at the request URL just shows /my/spring/controller.

What we want to achieve, then, is ideally a proxy pass transparent not just to the client but to the receiving servlet as well.

One option is to pass the URL in an environment variable with something like [E=FOO:REQUEST_URI]. However, this increases the complexity of the servlet, and it's not obvious to me whether the request is otherwise unchanged. The point of the servlet is to analyse the request coming from the client, preferably unchanged by Apache. If mod_rewrite changed the request URL, can I trust all other aspects of the request to be unchanged?
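
An alternative that keeps the servlet's view of the request conventional is to copy the captured URI into a request header with mod_headers; headers added this way do not otherwise alter the proxied request. A hedged sketch (cookie name, backend URL, and header name all hypothetical):

    RewriteCond %{HTTP_COOKIE} !visited [NC]
    # Capture the original URI into an env var while proxying
    RewriteRule ^(.*)$ http://backend/my/spring/controller [P,E=ORIG_URI:$1]

    # Copy the env var into a header the servlet can read
    RequestHeader set X-Original-URI "%{ORIG_URI}e" env=ORIG_URI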

Facing authentication error on postgres 9.2 -> dblink functions

Posted: 11 Apr 2021 11:01 PM PDT

I am using Postgres 9.2, and when executing the dblink function I am facing a fatal error while trying to execute dblink_connect, as follows:

SELECT * FROM dblink_connect('host=127.0.0.1 port=5432 dbname=postgres password=test')

    ERROR: could not establish connection
    DETAIL: FATAL: password authentication failed for user "NETWORK SERVICE"

What is this error related to? Do I need to modify the pg_hba.conf file by any chance?
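
For reference, the "NETWORK SERVICE" in the error is the Windows account the calling server process runs under: the connection string has no user= part, so libpq falls back to the process user name, and no Postgres password matches it. Supplying the role explicitly is usually enough (credentials hypothetical):

    SELECT dblink_connect('myconn',
        'host=127.0.0.1 port=5432 dbname=postgres user=postgres password=test');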

Tomcat mod_jk cluster skip 404 http status

Posted: 11 Apr 2021 11:01 PM PDT

I have been trying Tomcat clustering with mod_jk for months, and so far not so bad, but I am facing a problem during deployment. I am using FarmDeployer to copy and deploy the WAR to other nodes in the cluster, but most of the time the WAR is not deployed properly, leaving the page with a 404 error. Even after removing the exploded WAR directory and having Tomcat extract the WAR again, the browser couldn't render the actual site until I restarted/stopped the Tomcat service on that particular node (of course, http://node-ip/myapp works if the WAR is redeployed, but not http://site1.mydomain.net once it has rendered a 404 page). I also think this problem is browser-related (I tried all the browsers), as the page rendered on other computers when redeployed after the 404 error. I also tried fail_on_status, which puts nodes that return a 404 HTTP status into the error state and redirects to another node, BUT in my testing I found that it puts those nodes completely into the error state, and no request is sent to them until restart, even though they are back working.

Workers.properties on load balancer:

    workers.tomcat_home=/usr/share/tomcat
    workers.java_home=/usr/lib/jvm/java-6-openjdk
    ps=/
    worker.list=cluster,balancer1,status

    worker.balancer1.port=8009
    worker.balancer1.host=localhost
    worker.balancer1.type=ajp13
    worker.balancer1.lbfactor=2
    worker.balancer1.cache_timeout=20
    worker.balancer1.socket_timeout=20
    #worker.balancer1.fail_on_status=-404,-503

    worker.web1.port=8009
    worker.web1.host=192.168.1.8
    worker.web1.type=ajp13
    worker.web1.lbfactor=4
    worker.web1.redirect=web2
    worker.web1.cache_timeout=20
    worker.web1.socket_timeout=20
    #worker.web1.fail_on_status=-404,-503

    worker.web2.port=8009
    worker.web2.host=192.168.1.9
    worker.web2.type=ajp13
    worker.web2.lbfactor=4
    worker.web2.redirect=web1
    worker.web2.cache_timeout=20
    worker.web2.socket_timeout=20
    #worker.web2.fail_on_status=-404,503

    worker.cluster.type=lb
    worker.cluster.balance_workers=web1,web2,balancer1
    worker.cluster.sticky_session=True
    worker.cluster.sticky_session_force=False

    # Status worker for managing load balancer
    worker.status.type=status

Does anybody have any idea how to skip a node in the 404 error state and instead hit the other, properly deployed nodes? At least any configuration tips so that it renders the actual page after facing a 404, with sticky sessions enabled.

Update 1

Apache Virtual Hosting on Load balancer(192.168.1.5 or balancer1):

    <VirtualHost *:80>
    ServerName site1.mydomain.net
    JkAutoAlias /usr/share/tomcat/webapps/myapp
    DocumentRoot /usr/share/tomcat/webapps/myapp

    JkMount / cluster
    JkMount /* cluster
    JkMount /*.jsp cluster

    JkUnMount /myapp/*.html cluster
    JkUnMount /myapp/*.jpg  cluster
    JkUnMount /myapp/*.gif  cluster
    JkUnMount /myapp/*.png  cluster
    JkUnMount /myapp/*.css  cluster

    JkUnMount /abc cluster
    JkUnMount /abc/* cluster

    JkUnMount /*.html cluster
    JkUnMount /*.jpg  cluster
    JkUnMount /*.gif  cluster
    JkUnMount /*.png  cluster
    JkUnMount /*.css  cluster

    ProxyRequests Off
    ProxyPreserveHost On
    ProxyVia On

    <Proxy balancer://ajpCluster/>
        Order deny,allow
        Allow from all
        BalancerMember ajp://192.168.1.8:8009/ route=web1 ttl=60 timeout=20 retry=10
        BalancerMember ajp://192.168.1.9:8009/ route=web2 ttl=60 timeout=20 retry=10
        BalancerMember ajp://192.168.1.5:8009/ route=balancer1 status=+H ttl=60
        ProxySet lbmethod=byrequests
        ProxySet stickysession=JSESSIONID|jsessionid
    </Proxy>

    <Location />
        ProxyPass balancer://ajpCluster/ nofailover=off
        ProxyPassReverse balancer://ajpCluster/
    </Location>

    </VirtualHost>

Tomcat virtual Hosting common on all the nodes:

    <Host name="localhost" appBase="webapps"
          unpackWARs="true" autoDeploy="true" deployOnStartup="true">
      <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
             prefix="localhost_access_log." suffix=".txt"
             pattern="%h %l %u %t &quot;%r&quot; %s %b" />
    </Host>

    <Host name="site1.mydomain.net" debug="0" appBase="webapps" unpackWARs="false" autoDeploy="false" deployOnStartup="false">
      <Logger className="org.apache.catalina.logger.FileLogger" directory="logs" prefix="virtual_log1." suffix=".log" timestamp="true"/>
      <Context path="" docBase="/usr/share/tomcat/webapps/myapps" debug="0" reloadable="true"/>

No session replication with Tomcat clustering: disabled for now by commenting out the <Cluster> element, as it consumes a lot of memory with all the nodes updating and interacting with one another all the time. For now I have load balancing and auto-failover with mod_jk or proxy_ajp, BUT with the 404-error problem when myapp is unavailable (and available again), as described above. How is everybody handling this?

Password mismatch while logging in to SQL Server

Posted: 11 Apr 2021 03:08 PM PDT

Alright, I have a classic ASP application and a connection string to try to connect to the DB.

My connection string looks as follows:

    Provider=SQLOLEDB;Data Source=MYPC\MSSQLSERVER;Initial Catalog=mydb;database=mydb;User Id=me;Password=123

Now when I'm accessing the DB through the front-end, I get this error:

    Microsoft OLE DB Provider for SQL Server error '80040e4d'
    Login failed for user 'me'.

I looked in SQL Profiler and I got this:

    Login failed for user 'me'. Reason: Password did not match that for the login provided. [CLIENT: <named pipe>]
    Error: 18456, State: 8.

What I've tried:

  1. Checked 100 times that my password is actually correct.
  2. Tried this: alter login me with check_policy off (I don't even know why I did this).
  3. Enabled ALL possible permissions for this account in SSMS.
  4. Tried this connection string: Provider=SQLOLEDB;Data Source=MYPC\MSSQLSERVER;Initial Catalog=mydb;database=mydb;Integrated Security=SSPI

And I got this error:

    Microsoft OLE DB Provider for SQL Server error '80004005'
    Cannot open database mydb requested by the login. The login failed.
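
Two hedged checks worth running in SSMS: confirm the login's password and state server-side, and note that the second error ("Cannot open database") typically means the login lacks a user mapping in mydb rather than a wrong password. A sketch (ALTER ROLE needs SQL Server 2012+; use sp_addrolemember on older versions):

    -- Reset the password and make sure the login is enabled
    ALTER LOGIN [me] WITH PASSWORD = '123';
    ALTER LOGIN [me] ENABLE;

    -- Map the login into the target database
    USE mydb;
    CREATE USER [me] FOR LOGIN [me];
    ALTER ROLE db_datareader ADD MEMBER [me];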

IP conflicts from MikroTik router for multiple IP addresses (that it isn't assigned)

Posted: 11 Apr 2021 08:07 PM PDT

I have a point-to-point wireless connection using two MikroTiks. When I plug the MikroTik into a switch with just my laptop, I get an IP address conflict on my machine no matter what IP I am assigned. Using Wireshark, I see the conflicts come from the MAC address of the MikroTik on the other end of the wireless connection. Why is it conflicting with multiple IP addresses when the router itself is assigned a single IP address, with no NAT entries or anything like that? I included a little diagram to help visualize my issue:

[me] [mikrotik] --------------[problem mikrotik]----(other equipment on diff subnet)

The problem MikroTik has a WAN on the same subnet as my machine. The LAN is a different subnet. Any ideas? When I plug the equipment into my network, I get IP conflicts on a lot of different servers. It took me forever to isolate it to this MikroTik! Thanks.

Oh, and all this equipment had been working previously with no known changes made to the configs. It just started acting up recently.
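
One hedged guess worth checking: a MikroTik answering ARP for addresses it does not own is the classic symptom of proxy ARP enabled on the interface facing your subnet; in that mode the router replies to any ARP request it believes it can route, which duplicate-address detection reads as an IP conflict. A RouterOS sketch (interface name hypothetical):

    # Show the ARP mode of each interface; look for arp=proxy-arp
    /interface ethernet print detail

    # Switch the offending port back to normal ARP behaviour
    /interface ethernet set ether1 arp=enabled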

PSU: 20/24 pin + 4 pin

Posted: 11 Apr 2021 05:59 PM PDT

Looks like I need to replace the power supply on one of the machines, but I am confused by the plugs.

The MB (ASRock 939N68PV-GLAN) has a 24-pin connector and a separate 4-pin (2x2) header.

The original PSU had a 24-pin header and the 4-pin one. The 4-pin seems to be required (the PC doesn't reach POST if it is not connected).

In what scenario do I need both the 24-pin plug and the 4-pin plug?


Some more info:

The board originally ran with a PCI Express card (ASUS EN8600GT SILENT). The 2x2 plug seems to be close to being fritzed (discolored, probably too much current). The problems observed were the PC not reaching POST on boot and the PCI Express card not being detected. Other than that, the PC is rock stable.

The original PSU seems to boot OK when using only onboard graphics (I don't want to push my luck, though). I currently only have 20+4 replacement PSUs available.
