Saturday, April 24, 2021

Recent Questions - Server Fault

Putting a WAF on a load balancer?

Posted: 24 Apr 2021 09:49 PM PDT

At the moment, our application servers are directly accessible on the internet, as the following picture shows.

one server that takes it all

With this in mind, it would be awful if a server crashed (hardware failure) or otherwise stopped doing its work.

To prevent this, I would like to split my application server and put a load balancer in front of it, as the next picture shows. A separate database server also appears there; it is not part of the question, just a note that the database will be extracted from the APS as well.

load balancer and more AP-servers

While the WAF (ModSecurity for Apache) runs on the application servers at the moment, would you put the WAF on the load balancer in the new configuration? I thought about using NGINX as the proxy/load balancer. Or should I leave it on the APS? I am also not sure whether it makes a difference if TLS termination is done by the APSs or on the load balancer.

Our concerns are mostly about security, availability and, of course, performance.
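For illustration only, a minimal sketch of what the load balancer could look like if the WAF moves there, assuming the ModSecurity v3 nginx connector module is installed; every hostname, path and upstream below is a placeholder, not our real setup:

sudo tee /etc/nginx/conf.d/lb.conf >/dev/null <<'EOF'
upstream app_servers {
    server aps1.internal:8080;   # placeholder application servers
    server aps2.internal:8080;
}
server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate     /etc/nginx/tls/fullchain.pem;  # TLS terminated here,
    ssl_certificate_key /etc/nginx/tls/privkey.pem;    # so the WAF inspects plaintext
    modsecurity on;
    modsecurity_rules_file /etc/nginx/modsec/main.conf;
    location / {
        proxy_pass http://app_servers;
    }
}
EOF
sudo nginx -t && sudo systemctl reload nginx

One consequence worth noting: if TLS is instead terminated on the APSs, a WAF on the load balancer only sees encrypted traffic and cannot inspect it.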

Thank you :)

Why would a particular network card only work with certain IP blocks?

Posted: 24 Apr 2021 09:30 PM PDT

I have a Proxmox (Debian-based) cluster, and each node has 4 network ports: two 10GbE and two 1GbE. One node is behaving in a way I can't explain.

When I set an IP address from a 172.16.0.0/29 block on one of the 10GbE cards, it can't be pinged from the other nodes; however, it works with an address from a 192.168.1.0/23 block.

An address from the 172.16.0.0/29 block works on the 1GbE cards, however. I don't understand how the physical medium can make a difference to the routing.
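For reference, a few checks that would narrow this down on the misbehaving node (the address below is an example peer, not from the actual cluster):

ip -br addr                 # which addresses sit on which NICs/bridges
ip route get 172.16.0.2     # which interface the kernel would use for a /29 peer
bridge vlan show            # VLAN filtering on the bridges, if enabled

A /29 covers only 8 addresses, so if the peers' prefix lengths or subnets don't match, on-link ARP never happens; a VLAN mismatch limited to the 10GbE ports would look identical to this symptom.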

Which Linux distros support host-managed HDDs at the file system level?

Posted: 24 Apr 2021 09:59 PM PDT

According to this Seagate presentation, there are some ongoing (?) efforts to modify the ext4 file system, introducing SMRFS-EXT4: support for host-managed hard drives. The goal, I believe, is to provide a layer that hides the specifics of ZAC commands from applications. There is also this document, which claims that "As of kernel v 4.7... Host managed drives are exposed as SG node - No block device file". What does that mean? Maybe these documents are outdated and ext4 (or another common Linux file system) has since gained support for host-aware HDDs. Which Linux distros support host-managed HDDs at the file system level? If such support exists, what steps are needed to get a host-managed HDD up and running without changes to applications (where the file system hides all the specifics)? General applications like databases are my concern - not log-style ones.

There is also this video (SDC2020: Improve Distributed Storage System TCO with Host-Managed SMR HDDs) claiming that, starting from the 4.10 Linux kernel, f2fs already supports host-managed drives - have you used such an approach? Maybe f2fs is not the best match for random operations, but I hope f2fs can fulfill such tasks with acceptable performance (where reading is dominant).
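For reference, a hedged sketch of how a host-managed drive is usually identified and formatted with f2fs; the device names are examples and the mkfs flags are from memory of the f2fs-tools documentation, so double-check them:

cat /sys/block/sdb/queue/zoned   # prints "host-managed", "host-aware" or "none"
lsblk -o NAME,SIZE,ZONED         # ZONED column needs a reasonably recent util-linux
# f2fs wants a conventional device for metadata alongside the zoned one;
# -m forces the zoned block device model, -c attaches the SMR drive
mkfs.f2fs -f -m -c /dev/sdb /dev/nvme0n1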

Is there any self-hosted “notes only” CalDAV server or Exchange server?

Posted: 24 Apr 2021 06:24 PM PDT

I'm trying to build a Linux server that replaces iCloud notes sync for iPhone and Mac. After googling, I know that the notes sync depends on CalDAV.

So, is there any open source server-side app that provides iCloud-like notes sync on Linux? With or without mail, calendar... it doesn't matter.

I'm open to any suggestions, thank you all.

Unknown private IP addresses tried to log in to my router's admin panel

Posted: 24 Apr 2021 03:27 PM PDT

I noticed some weird traffic in my router's admin panel logs:

2021-03-25 06:27:19 [Critical][Config-Log] Terminal:WEB(10.181.189.71),Result:Fail,Type:Login,Username:-
2021-03-25 06:27:22 [Critical][Config-Log] Terminal:WEB(10.181.189.71),Result:Fail,Type:Login,Username:-
2021-03-25 06:27:26 [Critical][Config-Log] Terminal:WEB(10.181.189.71),Result:Success,Type:Login,Username:root
2021-03-25 06:27:50 [Critical][Config-Log] Terminal:WEB(10.181.189.71),Result:Success,Type:Logout,Username:root
2021-03-28 11:43:15 [Critical][Config-Log] Terminal:WEB(10.181.189.62),Result:Fail,Type:Login,Username:-
2021-03-28 11:43:19 [Critical][Config-Log] Terminal:WEB(10.181.189.62),Result:Fail,Type:Login,Username:-
2021-03-28 11:43:27 [Critical][Config-Log] Terminal:WEB(10.181.189.62),Result:Fail,Type:Login,Username:-
2021-03-28 11:43:27 [Error][Alarm-Log] AlarmID:104032,AlarmLevel:Error,Administrator exceeded maximum number of attempted logins.Terminal:WEB
2021-03-28 11:44:27 [Error][Alarm-Log] AlarmID:104519,AlarmLevel:Error,[WEB]unlocked!
2021-04-18 04:54:06 [Critical][Config-Log] Terminal:WEB(10.181.189.61),Result:Fail,Type:Login,Username:-
2021-04-18 04:54:08 [Critical][Config-Log] Terminal:WEB(10.181.189.61),Result:Fail,Type:Login,Username:-
2021-04-18 04:54:11 [Critical][Config-Log] Terminal:WEB(10.181.189.61),Result:Fail,Type:Login,Username:-
2021-04-18 04:54:11 [Error][Alarm-Log] AlarmID:104032,AlarmLevel:Error,Administrator exceeded maximum number of attempted logins.Terminal:WEB
2021-04-18 04:55:11 [Error][Alarm-Log] AlarmID:104519,AlarmLevel:Error,[WEB]unlocked!

The address of my router is 192.168.100.1 and it only leases addresses in 192.168.100.0/24, yet some unknown class A private IPs that were never connected to my network tried to access the panel in the early hours. The event logging is quite verbose, so every configuration change (including deleting entries) is explicitly logged; in this case, however, there were only login attempts, one of which was successful.

Is it possible that these were my ISP's attempts, or am I the target of an attack?

Docker - Adding a new WordPress container to an existing SQL database container

Posted: 24 Apr 2021 03:07 PM PDT

I'm trying to understand, or find information on, how to connect a new WordPress container to an existing MariaDB container. I'm missing something. I can add a WordPress instance while also creating the MariaDB container; see below.

version: '3.7'
services:
  wordpress:
    image: wordpress
    restart: always
    ports:
      - 8282:80
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_USER: exampleuser
      WORDPRESS_DB_PASSWORD: examplepass
      WORDPRESS_DB_NAME: exampledb
    volumes:
      - ./:/var/www/html
    links:
      - db:db
  db:
    image: mariadb:latest
    restart: always
    container_name: mariadb
    environment:
      MYSQL_DATABASE: exampledb
      MYSQL_USER: exampleuser
      MYSQL_PASSWORD: examplepass
      MYSQL_ROOT_PASSWORD: password
    volumes:
      - db:/var/lib/mysql
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    restart: always
    container_name: phpmyadmin
    ports:
      - "8081:80"
    environment:
      PMA_HOST: mariadb

volumes:
  wordpress:
  db:
  phpmyadmin:

After that has spun up and is good to go, I then run another docker-compose.yml (see below), and I cannot get the WordPress instance to connect to the SQL instance.

version: '3.7'
services:
  wordpress:
    image: wordpress
    restart: always
    container_name: wordup
    ports:
      - 8283:80
    environment:
      WORDPRESS_DB_HOST: 172.20.0.3
      WORDPRESS_DB_USER: username
      WORDPRESS_DB_PASSWORD: password
      WORDPRESS_DB_NAME: wp2
    volumes:
      - ./:/var/www/html

volumes:
  wp2:

How would I point the new WordPress instance at the database I created on the MariaDB container?
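For context, a hedged sketch of one way to do this from the shell: attach the new container to the network the first compose project created. The network name "myproject_default" is an assumption; check yours with docker network ls.

docker network ls    # find the network the first compose project created

docker run -d --name wordup \
  --network myproject_default \
  -p 8283:80 \
  -e WORDPRESS_DB_HOST=mariadb \
  -e WORDPRESS_DB_USER=exampleuser \
  -e WORDPRESS_DB_PASSWORD=examplepass \
  -e WORDPRESS_DB_NAME=exampledb \
  wordpress

Because both containers then share a network, WORDPRESS_DB_HOST can be the db service's container_name (mariadb) rather than an IP like 172.20.0.3, which can change on every restart.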

Thanks!

Server Side Events / Chrome Errors

Posted: 24 Apr 2021 02:57 PM PDT

I'm experiencing a console error in Chrome, but not Firefox, every few minutes; events still come through. This continues for a while until eventually Chrome logs a 520 error and the event source breaks completely.

I tested this locally and there were no issues at all in either browser. The difference in our production environment is that we are behind an nginx proxy and Cloudflare security.

These are the headers I'm using in the backend:

'Connection': 'keep-alive',
'Content-Type': 'text/event-stream',
'Cache-Control': 'no-cache',
'X-Accel-Buffering': 'no'

This is the code in the backend (Node.js):

res.writeHead(200, headers);
res.setTimeout(1000 * 60 * 120)
res.flushHeaders()
req.on('close', () => {
    res.end()
})
res.on('timeout', () => res.end())

And these are the nginx configurations I have tried:

proxy_set_header Connection '';
proxy_http_version 1.1;
chunked_transfer_encoding off;
proxy_buffering off;
proxy_cache off;
http2_max_field_size 64k;
http2_max_header_size 512k;
proxy_read_timeout 12h;

Front-end code

let source = new EventSource('/events')
source.onerror = (e) => console.log(e)
source.onmessage = (msg) => console.log(msg)
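As a hedged way to take the browser out of the equation, streaming the endpoint directly through the proxy shows whether the stalls come from nginx/Cloudflare or from the client (the domain is a placeholder):

curl -N -H 'Accept: text/event-stream' https://example.com/events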

Chrome console: (error screenshot omitted)

Appreciate any advice. Thanks

NAT - Two hosts with different private IP addresses but same public IP

Posted: 24 Apr 2021 03:12 PM PDT

I'm studying the NAT process and I have a question about addressing. When NAT translates private IP addresses to a public IP address, it can, if properly configured, translate different private IPs to the same public IP. How is it possible for two hosts to use the same public IP? I mean, if I ping that IP, would I get a response from both of them?
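A minimal sketch of the many-to-one case in Linux terms (interface and subnet are examples):

iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -o eth0 -j MASQUERADE

The NAT device keeps a per-connection translation table keyed on ports, so replies to outbound flows are demultiplexed back to the right private host. An inbound ping to the shared public IP matches no outbound entry, so typically the NAT device itself answers (or whatever host a port-forward/DNAT rule designates) - you would not get replies from both hosts.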

Does IPTable/Netfilter eat up ephemeral ports

Posted: 24 Apr 2021 01:24 PM PDT

I wanted to understand more about the Netfilter architecture. Let's say I have a machine with 3 IPs, and I configure SNAT to spread outgoing connections across all 3 IPs in round robin. Can I open more than 64k connections from that machine?
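For reference, a hedged sketch of the SNAT rule in question (addresses are examples):

iptables -t nat -A POSTROUTING -o eth0 -j SNAT --to-source 203.0.113.1-203.0.113.3

Conntrack picks a source IP and source port per connection, and the ~64k source-port limit applies per (source IP, destination IP, destination port) tuple - so three source addresses roughly triple the connection space toward any single destination.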

Remote Desktop is not working from outside network

Posted: 24 Apr 2021 03:29 PM PDT

I have Windows Server 2016 installed on my PC. I added roles and went through "Remote Desktop Service Installation" to set up the environment as a virtual machine based deployment. In my advanced system settings I have checked "Allow Remote Connections To This Computer" and also UNchecked "Allow connections only from computers running Remote Desktop with Network Level Authentication (recommended)".

I have a static IP address from my ISP.

I forwarded the static IP to my server's local IP (which is static: 192.168.0.2).

  • When I am on the local network and connect to 192.168.0.2, it works
  • When I am on the local network and connect to the static IP, it works
  • When I go to another network and connect to the static IP, it does NOT work
  • When I remove the forwarding from my router, the static IP stops working locally (it worked before removing it)

Based on this, I think my routing works and points to the PC.

I suspected the firewall settings, so I turned off every firewall I could find on the router and the server PC, but it is still not working.

I have tried all the other firewall-rule solutions I found on the internet, but none worked, and to be honest I no longer even remember everything I changed.
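For what it's worth, a quick reachability check from a host outside the LAN (the address below is a placeholder for the real static IP):

nc -vz -w 5 203.0.113.19 3389

If this times out from outside while RDP works locally, the block is upstream of the server - router port forwarding or the ISP (some filter 3389 or use CGNAT) - rather than the Windows firewall.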

EDIT: I just found out that routing might be the problem. The static IP works locally and I thought that was fine, but then I changed the static IP part of the forwarding rule from XXX.XXX.XXX.19 to XXX.XXX.XXX.18 and it still worked, so I concluded that locally the forwarding works with any IP.

Will I be blocking the IP of some Google-related service?

Posted: 24 Apr 2021 03:43 PM PDT

On my sites I have a script that sends me an email every time a new IP claiming to be Google visits the site.

When I see the email, I check (for example on whois.com) whether the IP that claims to be Google really is Google, and if not, I block it with the firewall.

Normally I find one or two fake Googles a week, but in the last few days Google has been hammering my server.

 103432 Mozilla/5.0 (Linux; Android 6.0.1; Nexus 5X Build/MMB29P) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.130 Mobile Safari/537.36 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)
1022802 Mozilla/5.0 (Linux; Android 6.0.1; Nexus 5X Build/MMB29P) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.80 Mobile Safari/537.36 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)
1063366 Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)
1178083 Mozilla/5.0 (Linux; Android 6.0.1; Nexus 5X Build/MMB29P) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.127 Mobile Safari/537.36 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)

(number of Google accesses to my server in the last 24 hours)

Something is happening: Google is entering my server much more than usual, and along with the Google accesses, the "fake Google" visits have increased a lot. It is strange that they have increased together...

Am I blocking the IP of some Google service?

(These are the ones from the last 24 hours)

$ sudo firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='193.203.11.230' reject"
$ sudo firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='45.80.104.189' reject"
$ sudo firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='45.148.124.171' reject"
$ sudo firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='37.44.196.194' reject"
$ sudo firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='88.218.45.98' reject"
$ sudo firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='46.161.60.168' reject"
$ sudo firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='212.60.21.63' reject"
$ sudo firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='85.202.195.178' reject"
$ sudo firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='45.148.124.139' reject"
$ sudo firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='84.54.58.80' reject"
$ sudo firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='45.148.234.198' reject"
$ sudo firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='212.119.46.111' reject"
$ sudo firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='195.133.24.218' reject"
$ sudo firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='83.142.55.37' reject"
$ sudo firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='194.87.112.182' reject"
$ sudo firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='193.93.195.206' reject"
$ sudo firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='194.156.125.92' reject"
$ sudo firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='212.119.46.82' reject"
$ sudo firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='45.140.206.107' reject"
$ sudo firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='45.66.208.145' reject"
$ sudo firewall-cmd --reload

They seem to come almost all from the same sources (usually the whois.com results differ from one another).

So the doubt arises: am I blocking something that is part of Google? Like Lighthouse, PageSpeed or something else that is still part of Google's scans? Or are they just the IPs of scammers pretending to be Google to hack my server or clone my sites?

Could it be that Google-related services claim to be Google in the HTTP_USER_AGENT, and yet there is nothing about Google when verifying ownership on whois.com? (Even though Google itself says the only way to actually verify that an IP belongs to them is, for example, a reverse DNS lookup?)
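For reference, Google's documented verification is a reverse lookup followed by a forward confirmation, which whois alone does not give you (the IP below is only an illustrative crawler address):

host 66.249.66.1                       # should reverse-resolve to *.googlebot.com or *.google.com
host crawl-66-249-66-1.googlebot.com   # the name must forward-resolve back to the same IP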

Can you help me understand who they are, and what I should do with these IPs?

Thanks

----------------------update------------------

Perhaps we are going a little off topic, perhaps because some information is missing.

  • I manually check every single IP that claims to be Google and, if it is not Google (in DNS), I block it with the firewall. I don't do the reverse check automatically, because it is actually too slow and heavy.

  • I use fail2ban with many filters that block plenty of scam IPs; these are the ones that survive my fail2ban filters.

  • I don't want to block or hide my content; I want it to be seen and understood by search engines, but preferably without entire sites being cloned.

  • The "fake Google" IPs that I usually find on my server belong to SQL injection attempts and to the insertion of advertising comments on my sites; unfortunately, in the past I have also found entire sites cloned. (I later added some tricks that prevent cloning.)

From further checks, it appears that these new "fake Google" IPs all visit the same site on my server, each on a different page, and enter my server only once.

Only once; I think that is why they manage to survive my fail2ban.

Everything points to a cloning attempt, even if it is strange that it is done with this many IPs (no?).

But what makes me think the most is that the intrusions by these "fake Googles" have increased exponentially together with the scans by the "real Google" IPs.

And this makes me think that somehow they are connected, even if I can't find a connection between the real and fake Google IPs.

The help I would like from you: who actually are they? And, if they are just scammers, how do I block them?

These are today's IPs:

$ sudo firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='85.202.194.0/24' reject"
$ sudo firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='85.202.194.214' reject"
$ sudo firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='62.76.232.248' reject"
$ sudo firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='212.193.13.168' reject"
$ sudo firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='194.104.11.51' reject"
$ sudo firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='194.58.68.26' reject"
$ sudo firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='5.133.123.198' reject"
$ sudo firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='45.140.207.224' reject"
$ sudo firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='212.192.28.155' reject"
$ sudo firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='83.142.52.178' reject"
$ sudo firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='185.89.100.216' reject"
$ sudo firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='193.93.192.236' reject"
$ sudo firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='195.133.24.190' reject"
$ sudo firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='45.140.206.79' reject"
$ sudo firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='185.68.184.215' reject"
$ sudo firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='185.250.46.119' reject"
$ sudo firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='31.40.249.208' reject"
$ sudo firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='185.233.187.174' reject"
$ sudo firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='37.44.253.226' reject"
$ sudo firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='93.177.118.162' reject"
$ sudo firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='37.44.253.229' reject"
$ sudo firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='94.158.22.50' reject"
$ sudo firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='83.171.226.158' reject"
$ sudo firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='185.250.45.51' reject"
$ sudo firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='212.193.15.194' reject"
$ sudo firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='194.99.26.102' reject"
$ sudo firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='83.142.55.48' reject"
$ sudo firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='185.14.194.78' reject"
$ sudo firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='77.220.193.69' reject"
$ sudo firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='193.203.10.224' reject"
$ sudo firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='193.202.82.135' reject"
$ sudo firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='193.31.126.107' reject"
$ sudo firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='83.142.54.110' reject"
$ sudo firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='77.220.194.114' reject"
$ sudo firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='193.151.189.82' reject"
$ sudo firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='212.60.21.72' reject"
$ sudo firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='195.133.24.68' reject"
$ sudo firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='185.250.45.43' reject"
$ sudo firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='185.88.100.212' reject"
$ sudo firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='212.193.14.102' reject"
$ sudo firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='45.80.104.253' reject"
$ sudo firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='194.58.68.204' reject"
$ sudo firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='194.58.33.128' reject"
$ sudo firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='185.68.247.216' reject"
$ sudo firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='83.171.227.121' reject"
$ sudo firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='212.192.28.169' reject"
$ sudo firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='77.243.91.234' reject"
$ sudo firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='194.87.116.145' reject"
$ sudo firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='194.58.33.192' reject"
$ sudo firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='45.80.104.160' reject"
$ sudo firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='91.222.239.251' reject"
$ sudo firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='194.87.52.175' reject"
$ sudo firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='185.68.185.126' reject"
$ sudo firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='193.31.126.78' reject"
$ sudo firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='212.119.46.226' reject"
$ sudo firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='141.98.87.44' reject"
$ sudo firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='194.99.26.51' reject"
$ sudo firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='31.40.248.142' reject"
$ sudo firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='212.193.14.15' reject"
$ sudo firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='83.171.252.202' reject"
$ sudo firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='193.124.9.221' reject"
$ sudo firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='194.58.34.69' reject"
$ sudo firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='83.142.52.224' reject"
$ sudo firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='77.220.193.239' reject"
$ sudo firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='185.233.187.46' reject"

Mailbox file of mutt suddenly disappeared

Posted: 24 Apr 2021 07:27 PM PDT

I have recently discovered that my entire mutt mailbox has suddenly disappeared. I don't know how to get it back, if that is even possible.

I am on Linux Debian 10.

The sent file containing all sent mails has not disappeared; that's weird.

Maybe there are traces of the removal in the /var/log/mail* files?
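For instance, a quick way to search those logs, including rotated ones (Debian default paths; replace 'username' with the actual user or mailbox path):

sudo zgrep -i 'username' /var/log/mail.log* /var/log/mail.info* 2>/dev/null | less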

Does anyone know what happened?

Equipment required for AWS outpost

Posted: 24 Apr 2021 05:58 PM PDT

Has anybody configured network equipment for AWS Outposts? The demarcation point is the patch panel in the AWS rack. The requirement is to configure the equipment from the rack back to the internet. The rack will be put in a DC. Based on documentation I found on the internet, 6 Nexus switches are required. I think such a configuration is overkill for us.

There will (most likely) not be any additional devices in the DC connected to the Outpost, and we don't have an AS, nor do we plan to get one. It's just one rack, so I don't think I need 6 Nexus switches with full fail-over capability on each layer.

I am thinking about 2 switches at the access layer and a collapsed core connected to a firewall. I am not sure what equipment I would need to achieve that, especially the BGP part. Cisco is preferred, as I know it to some extent. Any ideas?

Cisco Extends Enterprise-grade Datacenter Networking To AWS Outpost

What other switches/equipment could I use?

Router MAC address does not match with the actual one

Posted: 24 Apr 2021 02:39 PM PDT

From my Windows 10 PC I am doing an arp -a to look up the MAC of my router/modem. I get something like:

Internet Address      Physical Address      Type
192.168.0.1           90-5c-66-27-eb-70     dynamic

Now, that MAC is none of the 3 MAC addresses printed on the back of the device. So I am wondering why I am seeing a different MAC?

Real MAC: 90:5C:44:19:8F:79. The connection from my PC is WiFi.

Thanks.

Windows Server 2019 VPN Error 8007042a After KB4480116

Posted: 24 Apr 2021 04:30 PM PDT

I am running Windows Server 2019 Datacenter with Routing and Remote Access for VPN access. Everything was working fine until I installed the January 8, 2019 update: https://support.microsoft.com/en-us/help/4480116

After the update, I was unable to start the routing and remote access service and got the following error:

(screenshot of the VPN error omitted)

I can confirm that after I uninstalled the update, the service was able to start. Is anyone else having this problem?

Can not find script file "Z:\Scripts\LiteTouch.wsf"

Posted: 24 Apr 2021 02:39 PM PDT

I installed and configured MDT 2013 yesterday on Server 2012, alongside WDS. I am able to create a WIM file. I PXE-booted a test virtual machine and was able to install Windows 10. My MDT server also runs on VMware ESXi and is joined to my domain.

I created a task sequence with "sysprep and capture". On the VM that I want to capture, I mapped the deployment share and tried to run "LiteTouch.vbs". I got the following error:

Can not find script file "Z:\Scripts\LiteTouch.wsf"  

I am confused, because "LiteTouch.wsf" sits right below the .vbs file in the same folder.

This is the first time I am trying to capture an image on the newly built MDT.

Please let me know what I am doing wrong or what I can do to resolve this.

Problems with DNS and IPv6 on Server 2012 and 2016

Posted: 24 Apr 2021 07:22 PM PDT

I manage hundreds of servers for many customers. Most are in the SMB segment, with 1 to 3 servers per customer at most. In the past few weeks I have been getting more and more frequent DNS errors on random domain controllers, from 2008R2 to 2016. Simply put, the DC stops resolving DNS. This has happened on a dozen servers lately, and I haven't figured out the cause yet.

What is weird: on the same premises, 2 VMs, 2 domain controllers for 2 different companies, each with 15 users. Same ISP, same router, same switch. One DC works fine, no problems, while the second DC cannot resolve DNS anymore.

On server 1, local DNS fails... but nslookup against 8.8.8.8 works!?

C:\Users\Administrator>nslookup
Default Server:  UnKnown
Address:  ::1

> www.google.com
Server:  UnKnown
Address:  ::1

DNS request timed out.
    timeout was 2 seconds.
DNS request timed out.
    timeout was 2 seconds.
*** Request to UnKnown timed-out
> server 8.8.8.8
Default Server:  google-public-dns-a.google.com
Address:  8.8.8.8

> www.google.com
Server:  google-public-dns-a.google.com
Address:  8.8.8.8

Non-authoritative answer:
Name:    www.google.com
Addresses:  2a00:1450:4001:81c::2004
          172.217.16.68

>

On server 2, no problems:

C:\Users\Administrator>nslookup www.google.com
Server:  localhost
Address:  ::1

Non-authoritative answer:
Name:    www.google.com
Addresses:  2a00:1450:4006:802::2004
          216.58.206.132

Both are AD DCs in a single-domain setup; DNS is configured with public forwarders, and each DC's DNS points to itself only. IPv4 and IPv6 are enabled on the servers, but IPv6 is disabled on the router. I have not touched any of these servers for the past few months.

Did MS change anything? I do not remember DNS ever switching to IPv6 before... why did it switch now? And why does it work on one server and not on the other, when they are both the same (actually the same deployment, just configured for 2 different domains)?

TTFB on AWS EC2 Instance is Very Slow

Posted: 24 Apr 2021 06:06 PM PDT

I'm very new to hosting/server administration, and my employer asked me to use this chance to learn and find a way to update our current hosting setup.

Our current setup was one EC2 instance hosting multiple sites, managed through cPanel.

I did some research on AWS and this is the new setup I'm trying to implement. At a high level: a load balancer round-robins between two EC2 instances, the files for the website(s) live on a network file system, and the instances are also connected to a relational database. I might be in over my head on this, so if there's a better solution, please let me know.

Mini question on Load Balancer:

My load balancer listens on ports 80 and 443. I added multiple SSL certificates to the HTTPS listener, and it should be able to pick the right certificate via SNI. Both listeners forward to the same target group, whose health check protocol is HTTP. Something doesn't quite sit well with me here: if traffic comes in on port 443, wouldn't I just be forwarding it to an instance over port 80 instead of 443? It seems to work for a very simple website (it just redirects to another website), though, so I didn't put too much thought into it.

This is the test result from webpagetest.com. I'm not sure how to fix the TTFB, given that the ideal time is < 100ms and mine is 3036ms.
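One way to measure TTFB repeatably without a browser (the URL is the site from the vhost shown below):

curl -s -o /dev/null \
  -w 'dns:%{time_namelookup}  connect:%{time_connect}  ttfb:%{time_starttransfer}  total:%{time_total}\n' \
  https://purpose-built.com/

Running it once from the instance itself and once from outside separates network/load-balancer latency from the time Apache/PHP spends generating the page.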

I think it might be because the EC2 instances I'm using now are micro (free tier), so it might just be a hardware limitation. But given that I only have one website running right now, I wouldn't expect that to be the issue?

Or might I have set the virtual host up wrong? My virtual host configuration file looks like this:

<VirtualHost *:80>
    ServerName purpose-built.com
    ServerAlias www.purpose-built.com
    DocumentRoot /var/www/html/efs-mount-point/purpose-built/public_html
    ServerAdmin admin@purpose-built.com
    <Directory /var/www/html/efs-mount-point/purpose-built/public_html>
        AllowOverride All
    </Directory>
</VirtualHost>

Any direction on how to tackle this issue would be appreciated.

A CloudFormation stack hangs on creation or deletion when custom resource lambda is broken

Posted: 24 Apr 2021 06:06 PM PDT

I have noticed that if a template contains a custom resource lambda that is broken (there is a runtime error, or it doesn't properly send a response body), then the CloudFormation stack hangs in the creation phase on that particular resource.

When you try to forcibly delete the stack, it hangs on the same custom resource (because it invokes the same lambda on delete and receives the same error).

It takes 1 hour to reach the "DELETE_FAILED" state, after which you can forcibly delete the stack, ignoring the error from the custom resource lambda.

My question: is it possible to somehow avoid or reduce this huge (1-hour) delay?
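One hedged escape hatch I have seen described: pull the pre-signed ResponseURL out of the custom resource's request event (visible in the Lambda's CloudWatch logs if the event was logged) and answer CloudFormation by hand so it stops waiting. All field values below are placeholders taken from that event:

curl -X PUT -H 'Content-Type:' \
  -d '{"Status":"SUCCESS","PhysicalResourceId":"anything","StackId":"<from event>","RequestId":"<from event>","LogicalResourceId":"<from event>"}' \
  '<pre-signed ResponseURL from the event>'

The -H 'Content-Type:' strips curl's default Content-Type, since the pre-signed S3 URL does not expect one.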

And isn't such behavior a bug in CloudFormation? From my point of view, if the custom lambda fails with an error, there is no sense in waiting.

QNAP TS-419 QTS 4.1.2 Using OpenVPN to Connect as a VPN Client

Posted: 24 Apr 2021 04:07 PM PDT

I'm following up on a similar user request: QNAP TS-419p as a VPN Gateway?

I have a QNAP TS-419 II with firmware 4.1.2. I have a VPN service provider that uses OpenVPN for its VPN services. Among the files provided I have a ca.crt, myUser.crt, myUser.key, and a bunch of .ovpn configuration files for proxied or fully routed VPN connections.

Instead, I have followed these instructions: http://support.purevpn.com/qnap-nas-qts-4-x-openvpn-setup-guide

When I click connect, it fails after a few minutes. The following is the output in the log:

Sun Aug  9 00:40:02 2015 OpenVPN 2.2.1-8+deb7u3 arm-none-linux-gnueabi [SSL] [LZO2] [EPOLL] [eurephia] [MH] [PF_INET6] [IPv6 payload 20110424-2 (2.2RC2)] built on Aug  4 2015
Sun Aug  9 00:40:02 2015 WARNING: file '/share/MD0_DATA/Download/vpn/pass1.txt' is group or others accessible
Sun Aug  9 00:40:02 2015 WARNING: No server certificate verification method has been enabled.  See http://openvpn.net/howto.html#mitm for more info.
Sun Aug  9 00:40:02 2015 NOTE: the current --script-security setting may allow this configuration to call user-defined scripts
Sun Aug  9 00:40:02 2015 WARNING: file '/share/MD0_DATA/Download/vpn/vpn.key' is group or others accessible
Sun Aug  9 00:40:02 2015 Control Channel Authentication: using '/share/MD0_DATA/Download/vpn/vpn.key' as a free-form passphrase file
Sun Aug  9 00:40:02 2015 Key file '/share/MD0_DATA/Download/vpn/vpn.key' used in --tls-auth contains insufficient key material [keys found=1 required=2] -- try generating a new key file with 'openvpn --genkey --secret [file]', or use the existing key file in bidirectional mode by specifying --tls-auth without a key direction parameter

For the warnings, I understand it is a file permission issue; can someone tell me what the permissions should be?
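Those two warnings just mean the files are readable by group/others; owner-only permissions should silence them:

chmod 600 /share/MD0_DATA/Download/vpn/pass1.txt /share/MD0_DATA/Download/vpn/vpn.key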

For the note and the OpenVPN security suggestion, does anyone know what I am supposed to do from that point on to get it working properly?

My Client configuration file:

client
dev tun
proto tcp
tls-client
remote <remote IP> 80
persist-key
persist-tun
ca /share/MD0_DATA/Download/vpn/ca.crt
tls-auth /share/MD0_DATA/Download/vpn/key.key
cert /share/MD0_DATA/Download/vpn/cert.crt
cipher AES-256-CBC
comp-lzo
auth-user-pass /share/MD0_DATA/Download/vpn/pass1.txt
auth-retry interact
script-security 3
log-append /share/MD0_DATA/Download/vpn/openvpn.log
verb 3
mute 20
keepalive 10 60
writepid /var/run/openvpn.client.pid
up /etc/openvpn/openvpn_up
down /etc/openvpn/openvpn_down
explicit-exit-notify 2
ifconfig-nowarn
auth-nocache

Identifying mailboxes with corruption Exchange 2013

Posted: 24 Apr 2021 08:07 PM PDT

We're preparing to rid ourselves of on-premises Exchange and are migrating from an outsourced hosting provider to Office 365. We're running a fully patched version of Exchange 2013 atop Server 2012, also fully patched.

During the migration of test mailboxes we found that many mailboxes across multiple databases are corrupted. More information about the cause can be found HERE. Essentially: the SAN which stores our Exchange VM is oversubscribed, routinely has I/O waits in excess of 5 seconds, and sustained read speeds rarely pass 500KBps.

The slow speeds alone would waste significant time during the migration, but when mailboxes with corruption are encountered, migrating 1GB of data goes from a 2-3 hour affair to a 10-20 hour one. Each of the mailboxes with issues (that I've found so far) gives messages similar to the one below when checked with Get-MailboxStatistics:

WARNING: The object <GUID> has been corrupted, and it's in an inconsistent state. The following validation errors happened:
WARNING: Cannot extract the property value of 'DeletedItemCount'. Source: PropTag(DeletedMsgCount), PropType(Int), RawValue(-2), RawValueType(System.Int32). Target: Type(System.Nullable`1[System.UInt32]), IsMultiValued(False). Error Details: <n/a>

Running New-MailboxRepairRequest against all databases identified some corruption and repaired it, but not all of it. I can't seem to find a way to get Get-MailboxStatistics to log the fact that something is broken in each of these mailboxes, though I'm sure there is one. Moving mailboxes from one database to another seems to fix the problem. We've got ~50 DBs with about 50 users each, so going through this manually is out.

What I want to do is, via PowerShell (excuse lazy pseudo code please):

foreach ($mailbox in $database) {
    if (get-mailboxstatistics $mailbox -eq $corrupted) {
        move $mailbox to $otherdb
        wait
        move $mailbox back to $database
    }
}

However, I can't figure out how to catch the "Warning: this is broken" text from Get-MailboxStatistics, and the object it returns doesn't contain anything that shows the mailbox is broken.

Do I just need to catch the warning, assume everything that complains about inconsistencies can be fixed this way, and then go back and check whether the mailboxes that actually had issues are still broken after they've been moved out and back? Is there a better way to do what I need to do?

Replacing the SAN is out of the realm of possibilities, so is fixing any other underlying causes.

Permissions for SCCM Service Account

Posted: 24 Apr 2021 08:07 PM PDT

Does anyone know the least-privileged Active Directory security group needed for the MS SCCM 2012 service account to do software updates via Configuration Manager? ConfigMgr runs fine with the account in Domain Admins, but I'd like to give it less permission. I've heard it could be made a local admin on every target device through group policy, but I'm hoping there's a better solution. Digging through TechNet has yielded surprisingly little. Thanks

Elastix CDR stopped working

Posted: 24 Apr 2021 05:03 PM PDT

CDR was working before 19 March. Unfortunately I don't remember what changes I made to the configuration, but definitely no changes to the CDR config.

Elastix 2.4.0, Asterisk 11.7.0, MySQL 5.0.95

elastix*CLI> cdr show status

Call Detail Record (CDR) settings
----------------------------------
  Logging:                    Disabled
  Mode:                       Simple

/etc/asterisk/cdr.conf
[general]
enable=yes
unanswered = yes

/etc/asterisk/cdr_mysql.conf
[global]
hostname = localhost
dbname=asteriskcdrdb
password = *MYPASSWROD*
user = asteriskcdruser
userfield=1
;port=3306
;sock=/tmp/mysql.sock
loguniqueid=yes

mysql> SHOW GRANTS FOR 'asteriskcdruser'@'localhost';
+-----------------------------------------------------------------------------------------------+
| Grants for asteriskcdruser@localhost                                                           |
+-----------------------------------------------------------------------------------------------+
| GRANT USAGE ON *.* TO 'asteriskcdruser'@'localhost' IDENTIFIED BY PASSWORD 'HASHHERE'          |
| GRANT ALL PRIVILEGES ON `asteriskcdrdb`.* TO 'asteriskcdruser'@'localhost'                     |
+-----------------------------------------------------------------------------------------------+
2 rows in set (0.00 sec)
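Since "cdr show status" says logging is disabled despite enable=yes, one hedged thing to try is reloading the modules so cdr.conf is actually re-read:

asterisk -rx 'module reload'     # re-read config for all loaded modules
asterisk -rx 'cdr show status'   # check whether Logging has flipped to Enabled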

How to view all ssl certificates in a bundle?

Posted: 24 Apr 2021 01:23 PM PDT

I have a certificate bundle .crt file.

Doing openssl x509 -in bundle.crt -text -noout only shows the root certificate.

How do I see all the other certificates?
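One well-known way is to wrap the bundle in a PKCS#7 structure, which makes openssl print every certificate in the file:

openssl crl2pkcs7 -nocrl -certfile bundle.crt | openssl pkcs7 -print_certs -text -noout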

Clear sendmail MX server cache or DNS cache issue?

Posted: 24 Apr 2021 09:39 PM PDT

I have recently switched our mail server to a new location by updating the appropriate MX records in the domain's DNS. Everything seems to work and I am already receiving email on the new server.

However, some applications on a web server that uses sendmail to send messages are still sending messages to the old server. I assume there is either a cache mechanism in sendmail that "remembers" the old MX server, or some sort of DNS cache in place (no dnsmasq or nscd installed).

How do I make the web server's sendmail use the new MX server?

Ubuntu 12.10
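A couple of hedged checks (the domain is a placeholder): first confirm what this host's resolver returns now, then restart sendmail and force a queue run, since messages already queued may have been routed using the old MX:

dig +short MX example.com      # what the web server resolves right now
sudo service sendmail restart
sudo sendmail -q -v            # force a queue run
mailq                          # anything still queued for the old server?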

SSH SOCKS Proxy with iptables REDIRECT

Posted: 24 Apr 2021 09:39 PM PDT

I have googled and haven't found the answer to my question. Help me please. There are two servers:

serverA with public IP 12.0.0.10 and a private IP 10.0.0.5
serverB with public IP 20.0.0.11

I have set up a SOCKS proxy on serverB pointing to serverA:

ssh -D20.0.0.11:2222 root@12.0.0.10  

So when, on my local machine, I specify SOCKS proxy 20.0.0.11:2222 (serverB:2222) in a browser, my external IP while browsing shows as 12.0.0.10 (serverA's IP). That is OK.

Likewise, if I go to http://10.0.0.5 (serverA's private IP), it is also reachable. That is what I need. I want to make serverA's private IP available through serverB's public IP on certain ports, but without specifying a SOCKS proxy in my browser.

I could use SSH port forwarding, but the problem is that I need to forward many ports and I don't know exactly which ones - I only know the range.

So when I connect to 20.0.0.11 on any port from the 3000:4000 range, for example, I want that traffic to be redirected to 10.0.0.5 on the same port. That is why I decided that maybe a SOCKS proxy via SSH plus an iptables REDIRECT could help me.

Client -> serverBPublicIP (any port from range 3000:4000) -> serverAPublicIP -> serverAPrivateIP (the port was requested on serverBPublicIP)

On serverB i do:

ssh -D20.0.0.11:2222 root@12.0.0.10
iptables -t nat -A PREROUTING -d 20.0.0.11 -p tcp --dport 3000:4000 -j REDIRECT --to-port 2222

But that does not work - when I telnet to 20.0.0.11:3001, for example, I do not see any proxied traffic on serverA. What else should I do? I have tried tcpsocks like this (in this example I am telneting to 20.0.0.11:3001):

Client -> 20.0.0.11:3001 -> iptables REDIRECT from 3001 --to-port 1111 -> tcpsocks from 1111 to 2222 -> SOCKS proxy from serverB to serverA on port 2222 -> serverA  

But I do not know what to do with the traffic on serverA - how to route it to its private IP. Help me please. I know a VPN would remove all this hell I am trying to create, but I have no ability to use a tun/tap device. It is disabled.
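A hedged alternative sketch that avoids SOCKS entirely: generate a plain ssh -L forward for every port in the range (heavy-handed, and ssh may struggle with a thousand listeners, but it needs no tun/tap):

FWD=''
for p in $(seq 3000 4000); do FWD="$FWD -L 20.0.0.11:$p:10.0.0.5:$p"; done
ssh $FWD -N root@12.0.0.10

The REDIRECT-to-2222 approach cannot work on its own, because telnet speaks raw TCP while the ssh -D listener expects a SOCKS handshake; something has to translate between the two (redsocks is the usual tool for that).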

Nginx + php-fpm - recv() error

Posted: 24 Apr 2021 05:03 PM PDT

I get the following error in the nginx log:

[error] 17734#0: *6643 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: [cut], server: [cut], request: "GET /venues HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "[cut]"

I have a dedicated box with 8 GB RAM and a quad-core chip. A good server. Nginx, php-fpm and MySQL are all the latest versions, running under Ubuntu 10.04.

I only get this when I stress test the server with siege. If I increase the number of concurrent connections to 100, up to 20% of all requests fail.

Furthermore, I don't get this on pages that have no MySQL queries, and only a few failures on pages with a moderate number of queries. But I'm not sure whether that has anything to do with it.

I have a feeling this is something to do with PHP, but I can't figure it out.

Any suggestions of where to even start looking?
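Given the MySQL correlation, one hedged place to start is the php-fpm side; the log path below varies by distro and build, so adjust it:

tail -f /var/log/php5-fpm.log                   # look for children exiting on signals
grep -i 'max_children' /var/log/php5-fpm.log    # pool saturation under siege

A recv() connection-reset from the upstream usually means a php-fpm worker died or was recycled mid-request, which under load often traces back to max_children limits, request timeouts, or a crashing extension.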

Update: the PHP error log is silent. No record of anything going wrong.

Deleting All Partitions From the Command Line

Posted: 24 Apr 2021 07:24 PM PDT

How do you delete all partitions on a device from the command line on Linux (specifically Ubuntu)? I tried fdisk, but it presents an interactive prompt. I'm looking for a single command to which I can give a device path (e.g. /dev/sda), and it'll delete the ext4, linux-swap, and whatever other partitions it finds. Essentially, this would be the same as opening GParted and manually selecting and deleting all partitions. This seems fairly simple, but unfortunately I haven't been able to find anything through Google.
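Two common one-liners, both destructive and both taking the device path directly (the device below is an example):

sudo wipefs --all /dev/sda       # erase all filesystem and partition-table signatures
sudo sgdisk --zap-all /dev/sda   # from the gdisk package: destroy GPT and MBR structures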

IIS 7 - website works without www, but not with

Posted: 24 Apr 2021 09:50 PM PDT

I have a personal website that I am hosting at home (using Web Server 2008) that works without www, but not with it. Can someone PLEASE walk me through setting up the CNAME, A record, and whatnot? I have zero experience setting up IIS...
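As a first check from any Linux or Mac shell (the domain is a placeholder), see whether the www name resolves at all:

dig +short example.com
dig +short www.example.com

If the second query returns nothing, the fix starts at the DNS host: add a CNAME record for www pointing to example.com (or an A record to the same address), and then make sure the IIS site also has a binding for the www host name.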
