Thursday, June 2, 2022

Recent Questions - Server Fault

Applying IIS rewrite rules from multiple web.configs for single request

Posted: 02 Jun 2022 07:34 AM PDT

I am trying to configure a directory structure in an IIS website with rewrite rules applying at various levels. For example, consider the following structure:

Default Web Site
├─ web.config
└─ v1
   ├─ web.config
   └─ wwwroot
      └─ hello.txt

I want to be able to access hello.txt through http://localhost/hello.txt. I have configured the web.config at the website root level like this:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <location path="." inheritInChildApplications="false">
    <system.webServer>
      <rewrite>
        <rules>
          <rule name="Rewrite to v1" stopProcessing="false">
            <match url="^(?!v1).*" />
            <action type="Rewrite" url="v1\{R:0}" />
          </rule>
        </rules>
      </rewrite>
    </system.webServer>
  </location>
</configuration>

And I have configured the web.config in the v1 directory like this:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <location path="." inheritInChildApplications="false">
    <system.webServer>
      <rewrite>
        <rules>
          <rule name="Rewrite to wwwroot" stopProcessing="false">
            <match url="^(?!wwwroot).*" />
            <action type="Rewrite" url="wwwroot\{R:0}" />
          </rule>
        </rules>
      </rewrite>
    </system.webServer>
  </location>
</configuration>

The one-level rewrite works when I access http://localhost/v1/hello.txt, but the two-level rewrite doesn't work for http://localhost/hello.txt. The IIS error page shows that the request is still resolving to the physical path …\v1\hello.txt rather than …\v1\wwwroot\hello.txt; see https://i.stack.imgur.com/KjdH5.png.

Is there a way of getting this to work, or is it a limitation of the IIS rewrite module? Note that changing the rewrites to redirects allows the file to be served successfully.

I'm aware that I can make the web.config at the website root level reference the inner wwwroot directory directly (through url="v1\wwwroot\{R:0}"). However, I don't want to do this. I will eventually be extending the outer web.config to support other versions (v2, v3, …), which may or may not also have a wwwroot subdirectory, so it would be much cleaner to maintain any version-specific rewrites to inner subdirectories in the version-specific web.config.

I've tried using Failed Request Tracing. The outer rule, Rewrite to v1, is logged to run successfully. There is no mention of the inner rule, Rewrite to wwwroot.

rsyslog rewrite hostname before relay

Posted: 02 Jun 2022 07:20 AM PDT

I am setting up rsyslog in a multitenant environment to relay to a central server. Because it is multitenanted, I would like to prefix the hostname from the first rsyslog server with a customer-specific prepend before relaying on to the central server. I had planned to set the prefix manually; however, the prefix is already configured in another file on the server, and if it could be gathered from that file, that would be even better.

Because the first server will be relaying from multiple hosts, the prepend has to be a dynamic rewrite that includes the original hostname rather than a hard-coded overwrite of the same hostname for all entries, which I've seen in some examples.

Ideally, what I am trying to do is summarised by the following pseudocode:

ruleset(name="myrule") {
    set $hostname = "<prefix>-%HOSTNAME%"
    action(type="omfwd" target="remote-ip")
}

I will be responsible for both the intermediate relay and the central server, but each relay can host multiple customers, so I don't think the rewrite can be done on the central server; I do have full control of both layers, though. Each customer is connected via a dedicated interface, and I was planning for a separate ruleset attached to an input configured for each interface, with the ruleset including the customer-specific prefix. For this reason I think the config needs to be on the relay, but if there's a different way, I am willing to try anything that meets the end goal of making events customer-identifiable.

The reason for wanting the hostname rewrite is that it is in line with how other tools are configured in the environment, and it is highly desirable to keep a homogeneous setup. However, another method may be considered if the first is not technically feasible.

For example, each relay is connected to multiple customers via separate routing tables, and each end client has a different hostname, e.g. site1-sw1 or site2-rtr2. The problem is that the customer prefix is not in those names, and the prefix is our reference for knowing which customer a device relates to. In other systems we rename these as cust1-site1-sw1 and cust1-site2-rtr2, especially as there may also be a cust2-site2-rtr2, for example. We want the equivalent behaviour in syslog.

What is the correct way to do this?
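For reference, rsyslog's set statement works on variables ($!, $., $/) rather than on message properties like $hostname, so the usual approach is to forward with a template that rewrites the HOSTNAME field of the relayed message. A minimal sketch, assuming rsyslog 8.x on the relay; the target address and the cust1- prefix are placeholders:

template(name="Cust1Forward" type="string"
         string="<%PRI%>%TIMESTAMP% cust1-%HOSTNAME% %syslogtag%%msg%\n")

ruleset(name="cust1") {
    action(type="omfwd" target="central.example.com" port="514" protocol="tcp"
           template="Cust1Forward")
}

Binding one such ruleset to the input on each customer-facing interface keeps the prefix customer-specific while preserving the original hostname.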

disable deprecated resource automatic conversion

Posted: 02 Jun 2022 07:10 AM PDT

I'm trying to purposefully create K8s resources with a deprecated apiVersion for test purposes, but I keep ending up with the resource converted to the non-deprecated apiVersion. I don't understand why it's happening and can't find any discussion/topic on how to force the K8s API to respect my resource manifest.

Does anyone know how it could be done? Or even why it's acting like that?

Here is the resource I'm trying to create:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: deprecated-ingress
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /*
            pathType: ImplementationSpecific
            backend:
              serviceName: test
              servicePort: 80

I tried to create this resource on several clusters with the following versions:

  • 1.16
  • 1.21
  • 1.23

For each cluster, I was using the corresponding kubectl version. This is an indicator that the "conversion" is not happening client-side.

This is the created resource on the cluster; as you can see, the apiVersion is not the same…

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"networking.k8s.io/v1beta1","kind":"Ingress","metadata":{"annotations":{},"name":"deprecated-ingress","namespace":"test1"},"spec":{"rules":[{"host":"example.com","http":{"paths":[{"backend":{"serviceName":"test","servicePort":80},"path":"/*","pathType":"ImplementationSpecific"}]}}]}}
  creationTimestamp: "2022-06-02T12:06:04Z"
  finalizers:
  - networking.gke.io/ingress-finalizer-V2
  generation: 1
  managedFields:
  - apiVersion: networking.k8s.io/v1beta1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:finalizers:
          .: {}
          v:"networking.gke.io/ingress-finalizer-V2": {}
    manager: glbc
    operation: Update
    time: "2022-06-02T12:06:04Z"
  - apiVersion: networking.k8s.io/v1beta1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:kubectl.kubernetes.io/last-applied-configuration: {}
      f:spec:
        f:rules: {}
    manager: kubectl
    operation: Update
    time: "2022-06-02T12:06:04Z"
  name: deprecated-ingress
  namespace: test1
  resourceVersion: "489457660"
  selfLink: /apis/networking.k8s.io/v1/namespaces/test1/ingresses/deprecated-ingress
  uid: c8c80e6f-3e72-45b7-aca7-d17ab4a49f19
spec:
  rules:
  - host: example.com
    http:
      paths:
      - backend:
          service:
            name: test
            port:
              number: 80
        path: /*
        pathType: ImplementationSpecific
status:
  loadBalancer: {}

I also tried with a CronJob and version batch/v1beta1, and I ended up with the batch/v1 version.
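For what it's worth, this is expected API server behaviour: objects are persisted in a single storage version and converted on the fly, so the apiVersion served back need not match the manifest. While a deprecated version is still served by the cluster, it can be requested explicitly with the fully-qualified resource form; a sketch, using the namespace from the output above:

kubectl get ingresses.v1beta1.networking.k8s.io deprecated-ingress -n test1 -o yaml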

nginx redirect fastapi - invalid host in upstream

Posted: 02 Jun 2022 06:59 AM PDT

I have a FastAPI service and a UI running in these two Docker containers: ds-ai-ocr-main_web_1 is the HTML UI and dis-ai-ocr-main_api_1 is the FastAPI app, which looks like this:

from fastapi import FastAPI, File, UploadFile
from fastapi.responses import FileResponse

import main

app = FastAPI()

@app.post("/extract_text")
async def create_upload_file(upload_file: UploadFile = File(...)):
    return FileResponse(path="Outputs/ocr_output.zip",
                        filename="{}".format(main.allinall(upload_file)) + "_output.zip",
                        media_type='application/zip')

I build my Docker image with the following CMD instruction:

CMD ["uvicorn", "api:app", "--host", "0.0.0.0", "--port", "8020"]  

and here is my docker compose which integrates ui and fastapi:

version: "3.7"    services:         web:          build: ui          ports:            - 80:80          depends_on:            - api      api:          build: app          environment:            - PORT=80          ports:            - 8020:80            test:          build:                    context: ./            dockerfile: ./test/Dockerfile  

Now I want to redirect these two containers through another nginx service on port 8080 with this nginx.conf file:

user  nginx;
worker_processes  auto;

error_log  /var/log/nginx/error.log notice;
pid        /var/run/nginx.pid;

events {
    worker_connections  1024;
}

http {
    include mime.types;
    sendfile on;
    resolver 127.0.0.11;

    upstream web {
        server http://172.0.0.1;
    }

    server {
        listen 80;
        listen [::]:80;

        location /ai-ocr/ {
            proxy_pass http://web/;
        }

        error_page 502 /502.html;
        location = /502.html {
            root /app/static/;
            internal;
        }
    }
}

The issue is that when I run the new nginx container with the following command:

 docker run --name nginx -v c:/Users/Documents/redirect/ds-nginx-conf-main:/etc/nginx -p 8080:8080 -d nginx  

I receive this error:

2022/06/01 20:41:11 [emerg] 1#1: invalid host in upstream "http://localhost" in /etc/nginx/nginx.conf:19    nginx: [emerg] invalid host in upstream "http://localhost" in /etc/nginx/nginx.conf:19  
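For reference, server directives inside an upstream block take a host (and optional port), not a URL; the http:// scheme is what nginx rejects as an "invalid host in upstream". A sketch of a corrected block, assuming the UI container is the intended upstream; the container name only resolves if the nginx container joins the same Docker network as the compose services (the network name is an assumption):

upstream web {
    # host:port only -- no scheme inside upstream blocks
    server ds-ai-ocr-main_web_1:80;
}

The docker run command would then also need something like --network ds-ai-ocr-main_default so that name resolution works.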

Nginx multi-domain and multi web servers with one public IP

Posted: 02 Jun 2022 06:59 AM PDT

I have two web servers with different domains and only one public IP. I found that I can serve multiple domains on the same IP and same server as below, but I would like to open the right website depending on the domain. I tried

server {
    listen       80;
    server_name  first.domain.com;
    return 301 http://192.168.1.10;
}

but this configuration changes the URL to http://192.168.1.10! I want to see https://first.domain.com/request rather than http://192.168.1.10/request.
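A reverse proxy keeps the browser's URL intact, unlike a 301 redirect. A minimal sketch, assuming 192.168.1.10 is reachable from the nginx host:

server {
    listen 80;
    server_name first.domain.com;

    location / {
        # proxy instead of redirect: the client keeps first.domain.com in the URL
        proxy_pass http://192.168.1.10;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}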

Extract archive with tar but skip unchanged files

Posted: 02 Jun 2022 06:35 AM PDT

I have a nightly process that unarchives a roughly 40 GB tar.gz file like this:

tar -xzf latest-backup.tar.gz  

This step takes about 10 minutes, although often only a few files inside the archive have changed. I've seen that tar has some options for handling existing files, such as --skip-old-files:

--skip-old-files      don't replace existing files when extracting, silently skip over them  

Unfortunately, this also skips over files that changed and tar does not seem to support checking for file changes. Am I missing something, or is it really impossible to extract a large archive but only "apply the changes"?
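Two hedged options that get close: GNU tar's --keep-newer-files only replaces files that are older than the archive copy, and an rsync pass over a scratch extraction applies only the differences (the paths below are placeholders, and the scratch approach still pays the full extraction cost once):

# Replace a file only when the archive copy is newer than the one on disk:
tar -xzf latest-backup.tar.gz --keep-newer-files

# Or extract to a scratch directory and let rsync apply only the changes:
mkdir -p /tmp/restore
tar -xzf latest-backup.tar.gz -C /tmp/restore
rsync -a /tmp/restore/ /srv/data/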

bridge with bonding interface

Posted: 02 Jun 2022 05:55 AM PDT

I have a scenario like the one below:

     [F/W]  [F/W]
  eth1|      |eth3
      +      +
    [  CentOS7 ]
      +      +
  eth2|      |eth4
    [B.B]  [B.B]

Line1 = eth1, eth2
Line2 = eth3, eth4

It is a network redundancy configuration, one proxy server is inserted in the middle.

I am trying to create bonds bond-ex (eth1, eth3) and bond-in (eth2, eth4),

and will then make a bridge br0 (bond-ex, bond-in).

When I configure it like this, is communication on each line guaranteed?

For example..

traffic eth2 <-> eth1 is OK,

traffic eth2 <-> eth3 is not allowed until the eth1 port is dead.

Is this Possible?
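Active-backup bonding with an explicit primary expresses exactly that failover order. A minimal sketch, assuming CentOS 7 network-scripts; bond-in would name eth2 as its primary, and eth1/eth3 would be enslaved via MASTER=bond-ex in their own ifcfg files:

# /etc/sysconfig/network-scripts/ifcfg-bond-ex
DEVICE=bond-ex
TYPE=Bond
BONDING_MASTER=yes
BONDING_OPTS="mode=active-backup primary=eth1 miimon=100"
BRIDGE=br0
ONBOOT=yes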

Thanks,

How to check script status after_script?

Posted: 02 Jun 2022 05:15 AM PDT

In the after_script section: whether the script section fails or succeeds, after_script still runs, as I keep allow_failure: true. But how can I check in after_script whether the script section failed or succeeded, so that I can pass that status to an API call in after_script?
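GitLab exposes the job status to after_script through the predefined CI_JOB_STATUS variable (GitLab 13.5 and later). A sketch; the reporting URL is a placeholder:

job:
  script:
    - ./run-tests.sh
  allow_failure: true
  after_script:
    # CI_JOB_STATUS is "success", "failed" or "canceled" here
    - 'curl -X POST "https://api.example.com/report" -d "status=$CI_JOB_STATUS"'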

Site-to-Site VPN for overlapping cloud networks

Posted: 02 Jun 2022 05:02 AM PDT

I've got two cloud networks with the following private addressing:

  1. 10.10.0.0/16
  2. 10.0.0.0/8

I need to establish an IPsec site-to-site tunnel between them.

I thought about NETMAP, but is it possible to translate the whole /8 network in this case?
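NETMAP does 1:1 prefix translation at any length, /8 included, as long as both prefixes are the same size. A sketch, assuming iptables on the /8 side's gateway; 100.0.0.0/8 is only an illustrative substitute prefix, and ipsec0 stands in for a route-based tunnel interface:

# Present the local 10.0.0.0/8 as 100.0.0.0/8 across the tunnel, 1:1:
iptables -t nat -A POSTROUTING -o ipsec0 -s 10.0.0.0/8  -j NETMAP --to 100.0.0.0/8
iptables -t nat -A PREROUTING  -i ipsec0 -d 100.0.0.0/8 -j NETMAP --to 10.0.0.0/8

The other side would then address this network as 100.x.y.z, avoiding the overlap with its own 10.0.0.0/8.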

When a VM Template was last used (to create a VM)?

Posted: 02 Jun 2022 05:53 AM PDT

I need to know when a VM Template was last used to create a VM. I am working on a project to segregate unused templates in my vSphere environment, so I need that answer.

I tried Get-Template TemplateName | select *, but it does not give me the required information. Any help here is appreciated, thank you!
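Deployments from a template are recorded in vCenter's event history, so one hedged approach is to search the events rather than the template object; it only reaches as far back as the event retention window, and TemplateName is a placeholder:

# PowerCLI sketch: find the most recent deployment event for the template
Get-VIEvent -MaxSamples 100000 |
    Where-Object { $_.GetType().Name -eq 'VmDeployedEvent' -and
                   $_.SrcTemplate.Name -eq 'TemplateName' } |
    Sort-Object CreatedTime -Descending |
    Select-Object -First 1 CreatedTime, FullFormattedMessage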

Connect redis-cluster(running in docker) from host machine

Posted: 02 Jun 2022 04:58 AM PDT

I have used docker-compose with static IPs to create a Redis cluster. Everything ran successfully, but now I am stuck at how to connect my host application to the Redis cluster running inside Docker.

version: '3'

services:
  hdbrediscluster:
    container_name: hdbrediscluster
    image: redis:6.2.7-alpine
    command: redis-cli --cluster create 172.20.0.10:6380 172.20.0.11:6381 172.20.0.12:6382 172.20.0.13:6383 172.20.0.14:6384 172.20.0.15:6385 --cluster-replicas 1 --cluster-yes
    networks:
      database:
        ipv4_address: 172.20.0.9
    ports:
      - 6379:6379
    depends_on:
      - hdbredisnode1
      - hdbredisnode2
      - hdbredisnode3
      - hdbredisnode4
      - hdbredisnode5
      - hdbredisnode6

  hdbredisnode1:
    container_name: hdbredisnode1
    image: redis:6.2.7-alpine
    command: redis-server /usr/local/etc/redis/redis.conf
    volumes:
      - "/Users/hiteshbaldaniya/Projects/Dockers/redis/node1:/var/lib/redis"
      - "/Users/hiteshbaldaniya/Projects/Dockers/redis/config/node1.conf:/usr/local/etc/redis/redis.conf"
    networks:
      database:
        ipv4_address: 172.20.0.10
    ports:
      - 6380:6380

  hdbredisnode2:
    container_name: hdbredisnode2
    image: redis:6.2.7-alpine
    command: redis-server /usr/local/etc/redis/redis.conf
    volumes:
      - "/Users/hiteshbaldaniya/Projects/Dockers/redis/node2:/var/lib/redis"
      - "/Users/hiteshbaldaniya/Projects/Dockers/redis/config/node2.conf:/usr/local/etc/redis/redis.conf"
    networks:
      database:
        ipv4_address: 172.20.0.11
    ports:
      - 6381:6381

  hdbredisnode3:
    container_name: hdbredisnode3
    image: redis:6.2.7-alpine
    command: redis-server /usr/local/etc/redis/redis.conf
    volumes:
      - "/Users/hiteshbaldaniya/Projects/Dockers/redis/node3:/var/lib/redis"
      - "/Users/hiteshbaldaniya/Projects/Dockers/redis/config/node3.conf:/usr/local/etc/redis/redis.conf"
    networks:
      database:
        ipv4_address: 172.20.0.12
    ports:
      - 6382:6382

  hdbredisnode4:
    container_name: hdbredisnode4
    image: redis:6.2.7-alpine
    command: redis-server /usr/local/etc/redis/redis.conf
    volumes:
      - "/Users/hiteshbaldaniya/Projects/Dockers/redis/node4:/var/lib/redis"
      - "/Users/hiteshbaldaniya/Projects/Dockers/redis/config/node4.conf:/usr/local/etc/redis/redis.conf"
    networks:
      database:
        ipv4_address: 172.20.0.13
    ports:
      - 6383:6383

  hdbredisnode5:
    container_name: hdbredisnode5
    image: redis:6.2.7-alpine
    command: redis-server /usr/local/etc/redis/redis.conf
    volumes:
      - "/Users/hiteshbaldaniya/Projects/Dockers/redis/node5:/var/lib/redis"
      - "/Users/hiteshbaldaniya/Projects/Dockers/redis/config/node5.conf:/usr/local/etc/redis/redis.conf"
    networks:
      database:
        ipv4_address: 172.20.0.14
    ports:
      - 6384:6384

  hdbredisnode6:
    container_name: hdbredisnode6
    image: redis:6.2.7-alpine
    command: redis-server /usr/local/etc/redis/redis.conf
    volumes:
      - "/Users/hiteshbaldaniya/Projects/Dockers/redis/node6:/var/lib/redis"
      - "/Users/hiteshbaldaniya/Projects/Dockers/redis/config/node6.conf:/usr/local/etc/redis/redis.conf"
    networks:
      database:
        ipv4_address: 172.20.0.15
    ports:
      - 6385:6385

networks:
  database:
    name: database
    driver: bridge
    ipam:
      config:
        - subnet: 172.20.0.0/16

FYI, I am using the Java Vert.x Redis client to connect. Please let me know the solution. Note that I assigned arbitrary static IPs within the subnet. Also, I am working on macOS; Docker Desktop version: 4.8.2 (79419).
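One known pitfall on macOS: the 172.20.0.0/16 container network is not routable from the host, and the cluster nodes advertise those internal IPs, so a host client following MOVED redirections hits unreachable addresses. A sketch of per-node settings that make each node advertise a host-reachable address instead (shown for node1; the announce ports must match the published ports, and the bus port would need publishing in the compose file too):

# node1.conf
cluster-enabled yes
cluster-announce-ip 127.0.0.1
cluster-announce-port 6380
cluster-announce-bus-port 16380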

file system error during bootup

Posted: 02 Jun 2022 03:37 AM PDT

I am getting the errors below on the console of the Linux machine.

Welcome to emergency mode! After logging in, type "journalctl -xb" to view system logs, "systemctl reboot" to reboot, "systemctl default" to try again to boot into default mode.
systemd-fsck[160090]: /dev/sda3: Inodes that were part of a corrupted orphan linked list found.
systemd-fsck[160090]: /dev/sda3: UNEXPECTED INCONSISTENCY; RUN fsck MANUALLY.
systemd-fsck[160090]: (i.e., without -a or -p options)
systemd-fsck[160108]: /dev/sda3 contains a file system with errors, check forced.
systemd-fsck[160108]: /dev/sda3: Inodes that were part of a corrupted orphan linked list found.
systemd-fsck[160108]: /dev/sda3: UNEXPECTED INCONSISTENCY; RUN fsck MANUALLY.
systemd-fsck[160108]: (i.e., without -a or -p options)
systemd-fsck[160117]: /dev/sda3 contains a file system with errors, check forced.

The machine runs a custom Linux (Genband / Ribbon Communications).

I am unable to boot the machine using any live distro.
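For reference, the message itself asks for a manual check; from the emergency shell (with /dev/sda3 unmounted) the usual first step is, as a sketch:

fsck /dev/sda3        # interactive, as the message requests (no -a or -p)
# or, to accept all proposed repairs without prompting:
fsck -y /dev/sda3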

Help solicited.

Thanks, ZAKI

Apache 2.4 : Redirect a subdomain to a new domain except one url the / (index.html)

Posted: 02 Jun 2022 03:09 AM PDT

I just changed the domain name, and I would like to redirect the following links:

  • All old urls from subdomain.example.com/url.html --> subdomain.newdomain.com/url.html

Except one URL, the / (which implies index.html when no page is specified by the user in the browser):

  • The domain itself subdomain.example.com ---> subdomain.newdomain.com/newpage.html

How can I do that, please?

This is what I tried (without success, except for the mentioned links):

ServerName subdomain.example.com

RewriteEngine on
RewriteRule ^/(.*) https://%{subdomain.newdomain.com}/$1 [NC,R=301,L]

RewriteCond "%{HTTP_HOST}"   "==subdomain.example.com"
RewriteRule ^/ https://subdomain.newdomain.com/newpage.html [R=301,L]

#<If "%{HTTP_HOST} == 'subdomain.example.com'">
#        Redirect "/" "https://subdomain.newdomain.com/newpage.html"
#</If>
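For comparison, ordering the rules so the bare / is matched and consumed first might express the intent; a sketch, assuming the rules live in the virtual host context (where patterns match against the path with its leading slash):

ServerName subdomain.example.com

RewriteEngine on

# The bare "/" (the implicit index.html) goes to the new landing page:
RewriteRule ^/$ https://subdomain.newdomain.com/newpage.html [R=301,L]

# Everything else keeps its path on the new domain:
RewriteRule ^/(.+)$ https://subdomain.newdomain.com/$1 [R=301,L]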

Thank you

ubuntu 20.04 - ChrootDirectory in sshd_config won't work with tokens %h or %u

Posted: 02 Jun 2022 06:41 AM PDT

I am trying to lock users into their home directory using a dedicated group in sshd_config. The section for my group looks as follows:

Match Group sftponly
    ChrootDirectory %h
    X11Forwarding no
    AllowTcpForwarding no
    ForceCommand internal-sftp

Using %h or even /home/%u won't work when I try to connect with any user. I checked all permissions on their home directories and everything looks OK.

Interestingly, when I provide ChrootDirectory with a static path, everything works fine.

E.g. the following config lets users connect (but into the wrong directory):

Match Group sftponly
    ChrootDirectory /home/
    X11Forwarding no
    AllowTcpForwarding no
    ForceCommand internal-sftp

man sshd_config says that I am using the %h token correctly:

ChrootDirectory accepts the tokens %%, %h, %U, and %u.

I'd appreciate any hint, since I've spent hours on it already.
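One classic cause worth checking: sshd refuses a chroot target unless every component of the path is owned by root and not group- or world-writable. %h (and /home/%u) usually expand to a user-owned directory, while the static /home is root-owned, which matches the symptom exactly; "bad ownership or modes for chroot directory" in the sshd log would confirm it. A sketch of the usual layout, with username as a placeholder:

# Make the chroot target root-owned, then give the user a writable subdir:
chown root:root /home/username
chmod 755 /home/username
mkdir -p /home/username/upload
chown username:username /home/username/upload

With that in place, ChrootDirectory %h works, and ForceCommand internal-sftp -d /upload can drop users straight into their writable directory.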

Routing between 2 networks on Linux

Posted: 02 Jun 2022 03:06 AM PDT

My system topology:

On an Ubuntu machine with 2 Ethernet ports (eth0, eth1) I have connected another Ubuntu machine as a client and an OCR camera (also a client).

The requirement is that the main Ubuntu machine acts as DHCP server and router, so that the Ubuntu client and the camera both get an IP address from the main Ubuntu machine. The Ubuntu client and the camera need to be able to ping/SSH one another.

With nmcli commands and a configuration file in /etc/dnsmasq.d/X, I have configured both eth0 and eth1 on the main Ubuntu machine in shared mode:

"/etc/dnsmasq.d/XXX" config file:

no-resolv
port=53
bogus-priv
strict-order
expand-hosts

domain=wombat.pixellot.com

# Set Listen address
listen-address=192.168.101.1
dhcp-range=set:group1,192.168.101.10,192.168.101.100,24h
dhcp-option=tag:group1,option:router,192.168.101.1
dhcp-option=tag:group1,option:dns-server,192.168.101.1
dhcp-option=tag:group1,option:netmask,255.255.255.0

listen-address=192.168.102.1
dhcp-range=set:group2,192.168.102.10,192.168.102.100,24h
dhcp-option=tag:group2,option:router,192.168.102.1
dhcp-option=tag:group2,option:dns-server,192.168.102.1
dhcp-option=tag:group2,option:netmask,255.255.255.0

nmcli commands:

sudo nmcli connection add type ethernet ifname eth0 ipv4.method shared con-name EthCon0
sudo nmcli connection add type ethernet ifname eth1 ipv4.method shared con-name EthCon1

sudo nmcli connection modify EthCon0 ipv4.addresses 169.254.101.1/24
sudo nmcli connection modify EthCon1 ipv4.addresses 169.254.101.2/24

sudo nmcli connection up EthCon0
sudo nmcli connection up EthCon1

This is what ifconfig looks like on the main Ubuntu machine:

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.101.1  netmask 255.255.255.0  broadcast 192.168.101.255
        inet6 fe80::c706:5a57:f51d:a8b0  prefixlen 64  scopeid 0x20<link>
        ether 48:b0:2d:3b:6d:0b  txqueuelen 1000  (Ethernet)
        RX packets 76802  bytes 6700303 (6.7 MB)
        RX errors 0  dropped 8  overruns 0  frame 0
        TX packets 73153  bytes 7426646 (7.4 MB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
        device interrupt 37

eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.102.1  netmask 255.255.255.0  broadcast 192.168.102.255
        inet6 fe80::cf74:51de:1317:fe42  prefixlen 64  scopeid 0x20<link>
        ether ae:aa:82:3c:08:6c  txqueuelen 1000  (Ethernet)
        RX packets 41  bytes 4289 (4.2 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 187  bytes 29743 (29.7 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

The Ubuntu client and the camera, connected to eth0 and eth1 respectively, each got an IP address, and pinging them from the router works:

pixellot@wombat:~$ sudo ping 192.168.101.98
PING 192.168.101.98 (192.168.101.98) 56(84) bytes of data.
64 bytes from 192.168.101.98: icmp_seq=1 ttl=64 time=0.581 ms
64 bytes from 192.168.101.98: icmp_seq=2 ttl=64 time=0.569 ms

pixellot@wombat:~$ sudo ping 192.168.102.32
PING 192.168.102.32 (192.168.102.32) 56(84) bytes of data.
64 bytes from 192.168.102.32: icmp_seq=1 ttl=64 time=0.451 ms
64 bytes from 192.168.102.32: icmp_seq=2 ttl=64 time=0.508 ms

But when I try to ping from the Ubuntu client to the camera, it won't work:

yvesh@yvesh-XPS-15-9510:~$ ping 192.168.102.32
PING 192.168.102.32 (192.168.102.32) 56(84) bytes of data.
From 192.168.101.98 icmp_seq=1 Destination Host Unreachable
From 192.168.101.98 icmp_seq=2 Destination Host Unreachable

How can I make both clients communicate with each other? Is there a routing solution to this issue (not SSH tunneling)? I have tried many things but I am stuck and can't develop further. Please help! <3
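A hedged checklist for this symptom: the router has to forward between the two subnets, and nmcli's shared mode installs NAT/filter rules that may not cover LAN-to-LAN traffic. A sketch of the usual first steps on the main machine:

# Make sure the kernel forwards between interfaces at all:
sysctl -w net.ipv4.ip_forward=1

# Allow direct forwarding between the two LANs past the shared-mode rules:
iptables -I FORWARD -i eth0 -o eth1 -j ACCEPT
iptables -I FORWARD -i eth1 -o eth0 -j ACCEPT

The "Destination Host Unreachable" coming from the client's own address also suggests the client may lack a route; with option:router handed out by dnsmasq it should use 192.168.101.1 as gateway, which is worth verifying with ip route on the client.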

How to change mail.log save configurations to make gzips for one month?

Posted: 02 Jun 2022 07:14 AM PDT

I want to change the Postfix or system configuration so that all the information of each month is saved in mail.log, mail.err, and mail.info.

The system or Postfix creates new empty files after the logs reach some number of kB, or perhaps on another schedule, when the gz files are created.

How and where can I change that in the system or Postfix configs?

The OS is Debian with standard Postfix/Dovecot configs.
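For reference, rotation on Debian is handled by logrotate rather than by Postfix itself; the mail logs are typically covered by /etc/logrotate.d/rsyslog. A sketch of a monthly policy keeping 12 compressed months (the postrotate script path varies by release):

/var/log/mail.log
/var/log/mail.err
/var/log/mail.info
{
    monthly
    rotate 12
    compress
    delaycompress
    missingok
    notifempty
    postrotate
        /usr/lib/rsyslog/rsyslog-rotate
    endscript
}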

Multiple nested HTML include directives with a nginx server

Posted: 02 Jun 2022 06:21 AM PDT

I have index.html:

<!--#include virtual="/includes/Framework.inc"-->  

Inside Framework.inc I have:

<!--#include file="/includes/HTML.inc"-->  

However, when I open index.html and view the source code, I see the nested include did not happen; the full include directive is still visible:

<!--#include file="/includes/HTML.inc"-->  

How can we make sure nginx supports multiple levels of HTML includes?
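nginx's SSI module does process nested includes, but only when the subrequest that serves the included file is itself SSI-enabled and the response is of a MIME type the filter processes (text/html by default, which .inc files are usually not). A sketch matching the layout above:

location / {
    ssi on;
}

location /includes/ {
    ssi on;
    ssi_types *;   # .inc isn't served as text/html, so widen the processed types
}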

Apache2 on Ubuntu EC2 goes down and does not restart

Posted: 02 Jun 2022 04:24 AM PDT

History:

We moved a CodeIgniter 3 installation from Bluehost to a t3.2xlarge EC2 instance. That single instance hosts Apache2 and a MySQL server as a local database.

On Bluehost the site was running fine; the migration was done because Bluehost itself had outages and we wanted more reliable hosting.


Error

Since the migration, the page randomly goes down completely. Trying to restart apache2 with:

sudo service apache2 restart  

does not work; it requires a full reboot of the EC2 instance to get the service running again. After rebooting the instance, apache2 and mysql are running and the page is up, without manually starting the services.


Debug attempt 1

Since the page went down when database-intensive crons were run, I assumed the MySQL server was the bottleneck. Migrating the full database to a serverless RDS should eliminate all database-related bottlenecks, and the same database-intensive crons now finish. To further rule out the cron as the reason for the system going down, I cloned the EC2 instance and used the clone to run the cron, while the original hosts the webpage the domain points to.

However, random outages still persist.


Debug attempt 2

Assuming it is a memory issue: after checking phpinfo.php I saw that PHP had 128MB of RAM (on a 32GB machine), so just to see if more RAM helps:

  1. memory_limit set to 8192Mb
  2. reboot the EC2
  3. restart php7.4-fpm
  4. restart apache2

phpinfo confirmed the memory_limit is set to 8192M.

Random outages still persist.


Debug attempt 3

Checking the command:

sudo apache2ctl -t  

returns:

Syntax OK

Checking the command:

nano /var/log/apache2/error.log  

contains:

[mpm_worker:notice] AH00295: caught SIGTERM, shutting down

So I assume that Apache is somehow shutting down for some reason but is unable to restart.

Checking the command:

sudo service apache2 restart  

does not throw errors

Checking the command:

sudo apache2ctl restart  

does not throw errors

Checking the command:

/usr/sbin/apache2 -V  

shows:

[core:warn] [pid 24560] AH00111: Config variable ${APACHE_RUN_DIR} is not defined
apache2: Syntax error on line 81 of /etc/apache2/apache2.conf: DefaultRuntimeDir must be a valid directory, absolute or relative to ServerRoot
Server version: Apache/2.4.41 (Ubuntu)
Server built:   2022-03-16T16:52:53
Server's Module Magic Number: 20120211:88
Server loaded:  APR 1.6.5, APR-UTIL 1.6.1
Compiled using: APR 1.6.5, APR-UTIL 1.6.1
Architecture:   64-bit
Server MPM:
Server compiled with....
 -D APR_HAS_SENDFILE
 -D APR_HAS_MMAP
 -D APR_HAVE_IPV6 (IPv4-mapped addresses enabled)
 -D APR_USE_SYSVSEM_SERIALIZE
 -D APR_USE_PTHREAD_SERIALIZE
 -D SINGLE_LISTEN_UNSERIALIZED_ACCEPT
 -D APR_HAS_OTHER_CHILD
 -D AP_HAVE_RELIABLE_PIPED_LOGS
 -D DYNAMIC_MODULE_LIMIT=256
 -D HTTPD_ROOT="/etc/apache2"
 -D SUEXEC_BIN="/usr/lib/apache2/suexec"
 -D DEFAULT_PIDLOG="/var/run/apache2.pid"
 -D DEFAULT_SCOREBOARD="logs/apache_runtime_status"
 -D DEFAULT_ERRORLOG="logs/error_log"
 -D AP_TYPES_CONFIG_FILE="mime.types"
 -D SERVER_CONFIG_FILE="apache2.conf"

Where I can see 2 things:

  • there is an issue with ${APACHE_RUN_DIR}
  • Server MPM does not return an MPM

Checking the command:

 apache2 -l  

returns:

Compiled in modules:
core.c
mod_so.c
mod_watchdog.c
http_core.c
mod_log_config.c
mod_logio.c
mod_version.c
mod_unixd.c

which does not show an MPM module.

Checking the command:

apache2 -l
apache2ctl -l

returns:

Compiled in modules:
core.c
mod_so.c
mod_watchdog.c
http_core.c
mod_log_config.c
mod_logio.c
mod_version.c
mod_unixd.c

Checking the command:

a2query -M  

returns:

worker


Question:

And this is where I am stuck now. Is there anything else I can check, or anything to read from debug attempt 3, to see why Apache stops/does not restart and requires a full server reboot?
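One note on debug attempt 3: the ${APACHE_RUN_DIR} warning and the empty "Server MPM" are usually artifacts of invoking the binary directly, because the Debian/Ubuntu wrappers source /etc/apache2/envvars first. The equivalent calls with the proper environment look like this:

# Same information, with the environment the init scripts use:
. /etc/apache2/envvars && /usr/sbin/apache2 -V

# or simply:
apache2ctl -V

With those artifacts out of the way, the SIGTERM in the error log points at something external stopping the service, so the kernel log (dmesg, looking for the OOM killer) and syslog around the outage times would be the next place to look.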

Authorization Header Missing Upon NGINX Proxy Pass to subdomain

Posted: 02 Jun 2022 03:35 AM PDT

Hi, I'm running Laravel on an NGINX server, and I would like to use NGINX's reverse proxy capability as an API gateway for my Laravel and other Node API applications. Here are my configurations:

Application URL: staging-app.example.com
Application API Endpoint: staging-app.example.com/api
API Gateway URL: api.example.com

What I want to do is redirect all API requests from api.example.com/staging-app to staging-app.example.com/api. I have succeeded in redirecting the API requests, but somehow the Authorization header is not passed along to the proxy pass, resulting in 401 Unauthorized, while other headers do get passed along.

Here is my current api.example.com nginx config:

server {
        server_name api.example.com;

        location /staging-app {
                rewrite ^/staging-app/(.*)$ /$1 break;
                proxy_pass http://staging-app.example.com/;
        }

        location /test {
                rewrite ^/test/(.*)$ /$1 break;
                proxy_pass http://127.0.0.1:3333/;
        }

        listen [::]:443 ssl; # managed by Certbot
        listen 443 ssl; # managed by Certbot
        ssl_certificate /etc/letsencrypt/live/api.example.com/fullchain.pem; # managed by Certbot
        ssl_certificate_key /etc/letsencrypt/live/api.example.com/privkey.pem; # managed by Certbot
        include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
        ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}

server {
        if ($host = api.example.com) {
                return 301 https://$host$request_uri;
        } # managed by Certbot

        listen 80;
        listen [::]:80;

        server_name api.example.com;
        return 404; # managed by Certbot
}

and for my Laravel application I use the configuration provided by Laravel themselves.
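One hedged thing to try is forwarding the header explicitly, which at least rules this proxy layer out; and if the Laravel host itself runs PHP behind php-fpm, it commonly needs the header mapped into HTTP_AUTHORIZATION there:

location /staging-app {
        rewrite ^/staging-app/(.*)$ /$1 break;
        # explicitly pass the Authorization header upstream:
        proxy_set_header Authorization $http_authorization;
        proxy_pass_header Authorization;
        proxy_pass http://staging-app.example.com/;
}

On the staging-app server's PHP location block, the equivalent would be fastcgi_param HTTP_AUTHORIZATION $http_authorization;.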

How to set up DNS on google domains to host tomcat websites on the internet

Posted: 02 Jun 2022 06:59 AM PDT

Expectations / Target

  • We have a domain, say example.com, bought on Google Domains, and a PC running Windows 10 Pro.
  • We intend to make this PC a server for hosting 2 of our web apps, app1 and app2. Currently we do not own a static IP address, so let's refer to the public address as 192.0.2.0.
  • Web applications app1 and app2 are running on Tomcat in separate app bases, on ports 8081 and 8082.
  • We want to run app1 and app2 on the subdomains app1.example.com and app2.example.com respectively.

Here are all the things that are working:

  1. The web applications run in separate app bases in Tomcat (v9) on separate ports and are accessible locally and from the intranet.
  2. The web applications are also accessible from the internet with successful port forwarding (192.0.2.0:8081 and 192.0.2.0:8082 successfully load app1 and app2 respectively).

Problem: URL domain gets replaced with public IP address:

Now that the port forwarding was successful I tried domain forwarding (before reading much about how DNS configuration is supposed to be done).

This is how I did domain forwarding:

  1. I went to the website section (website_section_cropped_screen_shot.png) and clicked on Add a forwarding address (Add_a_forwarding_address_SS_cropped.png).
  2. Then, in the resulting form, I filled in the text boxes labeled Forward From and Forward to with app1.example.com and 192.0.2.0:8081 respectively.

Now, after doing this, the address app1.example.com would redirect to app1, but the URL in the browser would be replaced with 192.0.2.0:8081.

Then I read many articles and blogs telling me to add an A record or a CNAME record, but I could not understand how I should do it, or what combination of records is needed to make it work properly.

I tried the combination (in the Domain section):

Combination 1:

{ hostname=example.com, type=A, TTL=3600, Data:192.0.2.0 }
{ hostname=app1.example.com, type=CNAME, TTL=3600, Data:192.0.2.0:8081 }

Combination 2:

{ hostname=app1.example.com, type=TXT, TTL=3600, Data:192.0.2.0:8081 }  

But none worked and later it stopped making sense to me.

Please help me with this; I do not have any experience in setting up DNS for a website, or whatever else is needed to meet the above-mentioned expectations.
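For reference, DNS records can only map names to IP addresses, never to ports, which is why any combination with 192.0.2.0:8081 in the Data field cannot work. The usual shape is plain A/CNAME records plus a reverse proxy on port 80 that picks the backend by hostname; a sketch (nginx shown, since the update below goes that way):

# DNS (Google Domains custom records):
#   example.com         A       192.0.2.0
#   app1.example.com    CNAME   example.com
#   app2.example.com    CNAME   example.com

# nginx on the server, listening on port 80:
server {
    listen 80;
    server_name app1.example.com;
    location / {
        proxy_pass http://127.0.0.1:8081;
        proxy_set_header Host $host;
    }
}

A second server block with server_name app2.example.com and proxy_pass to port 8082 covers the other app. (A dynamic public IP additionally needs dynamic DNS to keep the A record current.)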

UPDATE


Thanks to @fvu I was able to get started with nginx, configured it locally (which is working), and got the hang of how a proxy server works (well, the best-case scenarios only).

Now there is another problem: I am not able to successfully open port 80 on the server machine.

I tried and tested everything:

  • nginx successfully starts and works on port 80, implying 80 is not blocked by any other process.
  • All other ports are properly port-forwarded, but port 80 is not shown as open; checked on many port checker sites (mostly portchecker.co).
  • To check what is happening, I edited the firewall settings to add rules and start logging ALLOWED as well as DENIED packets for all profiles (domain, public and private):
    • In the logs I saw that there were some requests for port 443. I suspect the requests on port 80 are somehow being converted to 443 but do not know for sure. IS IT POSSIBLE?
    • There were many requests on port 80 that were allowed, but port 80 is still shown as closed on the port checker sites.

Now I will either have to fix the port-80-not-opening issue or change the nginx port to something other than 80.

But if the port is changed, it will need to be mentioned somewhere, like in the DNS records or so, which is a bit unclear to me.

I was thinking maybe in the router I could forward external port 80 to internal 81 and have nginx run on 81 or so, but I have yet to try that out.

Meanwhile, could you tell me what the way is to manage a reverse proxy with nginx running on a port other than 80?

[AM IN HURRY SO MAY CONTAIN TYPOS OR LESS DETAIL. PLEASE ASK IF NEEDED!]

504 Gateway Time-out on NGINX Reverse Proxy even though the container is up

Posted: 02 Jun 2022 04:06 AM PDT

I have the following Docker setup:

  • jwilder/nginx-proxy for the reverse proxy

  • jrcs/letsencrypt-nginx-proxy-companion for SSL (Let's Encrypt)

  • custom WildFly container as the endpoint

My problem is that when visiting the website, a 504 error gets thrown. I give environment variables to the WildFly container containing multiple VIRTUAL_HOST, LETSENCRYPT_HOST and LETSENCRYPT_EMAIL values. I tried exposing the ports, but that did not help. Port 8080 is shown in docker ps -a. The weight, max_fails etc. are from a tutorial I found online, because it wasn't working for me and I thought it would fix it. Using curl IP:8080 gives a successful response.

My Nginx config in the container:

# wildfly.example.com
upstream wildfly.example.com {
        # Cannot connect to network of this container
        server 172.17.0.5:8080 weight=100 max_fails=5 fail_timeout=5;
}
server {
        server_name wildfly.example.com;
        listen 80 ;
        access_log /var/log/nginx/access.log vhost;
        # Do not HTTPS redirect Let'sEncrypt ACME challenge
        location /.well-known/acme-challenge/ {
                auth_basic off;
                allow all;
                root /usr/share/nginx/html;
                try_files $uri =404;
                break;
        }
        location / {
                return 301 https://$host$request_uri;
        }
}
server {
        server_name wildfly.example.com;
        listen 443 ssl http2 ;
        access_log /var/log/nginx/access.log vhost;
        ssl_session_timeout 5m;
        ssl_session_cache shared:SSL:50m;
        ssl_session_tickets off;
        ssl_certificate /etc/nginx/certs/wildfly.example.com.crt;
        ssl_certificate_key /etc/nginx/certs/wildfly.example.com.key;
        ssl_dhparam /etc/nginx/certs/wildfly.example.com.dhparam.pem;
        ssl_stapling on;
        ssl_stapling_verify on;
        ssl_trusted_certificate /etc/nginx/certs/wildfly.example.com.chain.pem;
        add_header Strict-Transport-Security "max-age=31536000" always;
        include /etc/nginx/vhost.d/default;
        location / {
                proxy_pass http://wildfly.example.com;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $server_addr:$server_port;
                proxy_set_header X-Real-IP $remote_addr;
        }
}

P.S. The comment that it cannot connect to the network exists because the proxy did not automatically detect the server and I had to manually enter the internal IP. My docker logs nginxcontainerid output:

2020/06/04 14:14:37 [error] 22247#22247: *6228 upstream timed out (110: Connection timed out) while connecting to upstream, client: IPHERE, server: wildfly.example.com, request: "GET / HTTP/2.0", upstream: "http://172.17.0.5:8080/", host: "wildfly.example.com"  
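The manually-entered 172.17.0.5 address plus the connect timeout suggests the proxy and WildFly containers sit on different Docker networks (172.17.0.0/16 is the default bridge). A sketch of attaching both to one user-defined network so nginx-proxy can detect and reach the upstream by itself; the container names here are placeholders:

docker network create proxynet
docker network connect proxynet nginx-proxy
docker network connect proxynet wildfly

Once both share a network, removing the hand-edited IP and letting the proxy regenerate its config is the cleaner path.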

limit_except VS $request_method !~ ^(GET|HEAD|POST)$

Posted: 02 Jun 2022 05:42 AM PDT

I have been reading quite a bit about nginx lately and found 2 approaches online. The first appears to work at the server context level, and the second is recommended for the location context level.

Question. Is it appropriate to use limit_except at the server context level?

Approach #1 ($request_method) embedded variable

# server context
#
# Disable unwanted HTTP methods
# Most of the time, you need just GET, HEAD & POST HTTP requests in your web application.
# Allowing TRACE or DELETE is risky as it can allow Cross-Site Tracking attack and potentially
# allow an attacker to steal the cookie information.
# So we return a 405 Not Allowed if someone is trying to use TRACE, DELETE, PUT, OPTIONS.

if ($request_method !~ ^(GET|HEAD|POST)$ ) {
    return 405;
}

Approach #2 (limit_except) method

# Limits allowed HTTP methods inside a location.
. . .

location /restricted-write {
    # location context
    limit_except GET HEAD {
        # limit_except context
        allow 192.168.1.1/24;
        deny all;
    }
}
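Per the nginx documentation, limit_except is only valid in location context, so it cannot sit at server level directly; the server-wide equivalent is to place it in the catch-all location. A sketch (note that limit_except's deny answers with 403, while the if approach returns whatever status you choose, 405 above):

server {
    location / {
        limit_except GET HEAD POST {
            deny all;
        }
    }
}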

iptables TRACE No chain/target/match by that name

Posted: 02 Jun 2022 07:06 AM PDT

I'm debugging the iptables rules for a KVM VM running a Buildroot image. When I try to set the following TRACE rule I get the error iptables: No chain/target/match by that name:

sudo iptables -t raw -A OUTPUT -p tcp --destination 192.168.1.0/24 --dport 8443 -j TRACE  

If I instead enable a LOG rule it works, and the packets are logged, but I need to check which rule, if any, is dropping the packets.

Update: information about the environment in which the problem occurs (inside the VM):

$ uname -a
Linux minikube 4.15.0 #1 SMP Sat Dec 8 00:26:02 UTC 2018 x86_64 GNU/Linux

$ cat /proc/version
Linux version 4.15.0 (jenkins@jenkins) (gcc version 7.3.0 (Buildroot 2018.05)) #1 SMP Sat Dec 8 00:26:02 UTC 2018
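That error usually means the kernel lacks the TRACE target extension, which is plausible on a trimmed Buildroot kernel (it needs CONFIG_NETFILTER_XT_TARGET_TRACE). A quick check, as a sketch:

# Try loading the module, then list the registered targets:
modprobe xt_TRACE
cat /proc/net/ip_tables_targets   # TRACE should appear once the module is loaded

If the module isn't shipped at all, the kernel has to be rebuilt with that option; falling back to LOG rules at the suspected drop points is the workaround in the meantime.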

Unable to increase IIS 7.5 upload limit (still 404 error)

Posted: 02 Jun 2022 04:06 AM PDT

I need to be able to upload large files to an ASP.NET application. I know that IIS 7.5 by default enforces a 30MB request limit, and I know that IIS throws a 404 error when you try to upload a file larger than the upload limit.

I have tried setting a 500MB upload limit in my application's web.config, and double-checked the 500MB upload limit in the IIS console's Request Filtering, successfully.

I still get a 404 error with a file as small as 23.2MB.

I'm writing on SF because I believe it's not an application problem but a server configuration problem. What more can I check?
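One hedged thing to double-check: the two limits live in different sections and different units, and both must be raised. A sketch of the relevant web.config fragments with 500MB values:

<configuration>
  <system.web>
    <!-- maxRequestLength is in kilobytes -->
    <httpRuntime maxRequestLength="512000" />
  </system.web>
  <system.webServer>
    <security>
      <requestFiltering>
        <!-- maxAllowedContentLength is in bytes -->
        <requestLimits maxAllowedContentLength="524288000" />
      </requestFiltering>
    </security>
  </system.webServer>
</configuration>

A request-filtering rejection shows up as substatus 404.13 in the IIS logs, which helps confirm whether a limit is firing at all; a plain 404 at 23.2MB (below even the default 30MB) may point somewhere else entirely.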

Internal corporate IIS 7.5 website not recognizing User's credentials - didn't used to ask for them

Posted: 02 Jun 2022 07:06 AM PDT

I'm a developer who's been asked to maintain the IIS configuration for the web app we're building, so bear with me.

We have an internal website that is accessible to employees once they've logged into the Windows LAN. The servers are Windows Server 2008 R2, they are remotely managed by an external service provider, and they have Symantec Endpoint Protection installed on them.

I rebooted our test server, and now the website asks for user credentials to view the page. Unfortunately, when I enter a valid user name and password for the corporate domain, it's not accepted. The server can still be reached remotely through Citrix jump servers using a user account on the corporate domain, so the problem seems specific to IIS.

The IIS permissions for the website are set to:

Anonymous Authentication - disabled
ASP.NET Impersonation - enabled
Forms Authentication - disabled
Windows Authentication - enabled

I need to leave anonymous authentication disabled because of a flow-through requirement in the system to Hummingbird's DM web service.

This was working before the reboot, and there were other issues going on in the company at the time (the internet proxy spontaneously denied all internet webpages, and a Samba share wasn't remounting for service account users). The code wasn't changed, and the IIS configuration wasn't changed (to my knowledge), so it seems like something on the network has changed, or maybe the service account.

This issue is likely going to get moved to corporate IT, but I need some sort of evidence that points to this being a network or security issue. Otherwise they will simply dismiss it, saying that it's our web app's fault or our IIS setup. Maybe it is, but I'm not sure what else to check, and nothing was changed except rebooting the machine.

Are there any tests or tools that I can run to test the authentication configuration and isolate potential causes? Given that the server has Symantec Endpoint Protection on it, is there a setting that may have been changed that would cause this behaviour? Are there other settings in the Server Management Console that might cause this behaviour?
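Since Windows Authentication breaking after a reboot is classically a Kerberos SPN or machine-account problem, one hedged first test is to check the registered service principal names from a domain-joined machine (the names below are placeholders):

rem List the SPNs registered for the web server's machine account:
setspn -L WEBSERVERNAME

rem Hunt for duplicate HTTP/ SPNs, which silently break Kerberos:
setspn -X

If Kerberos is the culprit, the site often still works when the client falls back to NTLM, so comparing behaviour with the site temporarily set to NTLM-only in the Windows Authentication providers list is a quick way to gather the evidence you need.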

Manual Multicast forwarding with Linux router

Posted: 02 Jun 2022 05:04 AM PDT

I have a Linux router (Ubuntu). It works well with unicast, but I have some trouble with multicast routing/forwarding.

The problem is that my hosts do not send IGMP/MLD messages, so the router does not learn that there are interested parties on a link.

How can I manually configure the forwarding, so that multicasts arriving on eth0 are forwarded out eth1?

I was trying to make it work with the following command: route add -net 224.0.0.0 netmask 240.0.0.0 eth0

But this seems only to be used for outgoing traffic.

I also tried smcroute, but this daemon does not work on my Ubuntu.

Is it possible with iptables to do the forwarding? Or with this route add command?
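For the record, neither iptables nor route add can create multicast forwarding state: the kernel needs mroute entries, which only a multicast routing daemon (smcroute, pimd) can install. The static rule smcroute would need looks like this sketch (the group address is a placeholder), so getting smcroute running is probably worth a second attempt:

# /etc/smcroute.conf
mgroup from eth0 group 239.1.1.1
mroute from eth0 group 239.1.1.1 to eth1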

Thx!!

How set owner of file cms.war for ftpuser and owner of cms folder for tomcat user?

Posted: 02 Jun 2022 06:01 AM PDT

I'm using Tomcat server. I would like the owner of the file cms.war to be the FTP user, and the tomcat user to be the owner of the cms/ folder.

When I uploaded cms.war it was automatically deployed into the cms/ folder, and when I deleted cms.war the cms/ folder was deleted too.
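A hedged sketch of the straightforward ownership split; the webapps path and user names are assumptions, and note that Tomcat's autodeployer recreates cms/ as the tomcat user on every deploy anyway, so only the .war needs intervention:

chown ftpuser:tomcat /opt/tomcat/webapps/cms.war
chmod 644 /opt/tomcat/webapps/cms.war
chown -R tomcat:tomcat /opt/tomcat/webapps/cms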

mail() sometimes not working, problem in sendmail parameter

Posted: 02 Jun 2022 06:01 AM PDT

The mail() function in PHP has been acting strangely these days.

<?php
mail("email@mail.com", "Subject", "Content");
?>

The above script works if I run "php script.php" on the command line. However, if I open the page (http://domain.com/script.php) in a browser, the mail is not sent, even though the mail function returns true.

I googled it and found a solution. It says to modify the php.ini file as follows.

Change

sendmail_path = "/usr/sbin/sendmail -t -i"

to

sendmail_path = "/usr/sbin/sendmail -t"

And it works for me right now. Does anyone know why removing the -i parameter solves the problem? It worked fine with -i for the past few months!!

User with Outlook receiving emails every hour regarding Microsoft Exchange offline address book 0X8004010F

Posted: 02 Jun 2022 07:22 AM PDT

About 6 months ago we upgraded our Exchange server from 2003 to 2010. I have read "0x8004010F when downloading Exchange offline address book" and found that that issue is the same as the one I am having.

I have also read the blog post mentioned in the above question.

The problem is that I cannot use this method, because the 2003 Exchange server has been decommissioned (properly).

Is there another solution so that this sync error can stop occurring?
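A hedged starting point in the Exchange 2010 Management Shell is to confirm the OAB is generated on the 2010 server and distributed over web-based distribution, then force a regeneration (the OAB name below is the default and may differ):

Get-OfflineAddressBook | Format-List Name,Server,VirtualDirectories,PublicFolderDistributionEnabled
Update-OfflineAddressBook "Default Offline Address Book"

If the OAB still references the decommissioned server or lacks virtual directories, Outlook clients keep failing with 0x8004010F until that is corrected.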

Search and delete lines matching a pattern along with comments in previous line if any

Posted: 02 Jun 2022 05:04 AM PDT

I have a requirement to write a shell script in csh to search for and delete lines matching a pattern, along with a comment in the previous line, if any. For example, if my file has the following lines:

Shell script
#this is  a test
pattern1
format1
pattern2
format2
#format3
pattern3

If the search pattern is "pattern", the output should be as follows:

Shell script
format1
format2

To be more precise, the lines which contain the pattern, and the previous line if it begins with "#", should be deleted.
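One way to express this is with sed called from the csh script. A sketch using GNU sed; it handles a single comment line directly above a match, not runs of several consecutive comment lines:

# Pair each "#" line with the line after it, delete the pair when the second
# line matches, and delete bare matching lines too:
sed -e '/^#/{N;/\n.*pattern/d;}' -e '/pattern/d' input.txt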

Thanks for the help
