Monday, July 4, 2022

Recent Questions - Server Fault


Why are there multiple WSL distributions on my Windows?

Posted: 03 Jul 2022 11:52 PM PDT

I run Windows 10 with WSL enabled. Running the command wsl -l -v produces the following output:

  NAME                   STATE           VERSION
* docker-desktop         Running         2
  docker-desktop-data    Running         2

I was stunned to see two running WSL distributions. Why is that (why not a single instance)? And why the strange names?

pseudo-terminal allocation not terminating ssh on remote container

Posted: 03 Jul 2022 09:59 PM PDT

I have a bash script integrated with CircleCI which SSHes into my remote container on Azure, pulls the code, and restarts the server. The script SSHes into the container, pulls the latest changes, and restarts the server, but it is unable to exit from the container, which keeps the build running on CircleCI.

I have a similar script on AWS and it works as expected. The only difference is that I didn't have to allocate a pseudo-terminal there.

I am guessing that allocating a pseudo-terminal is what keeps the CircleCI build job from exiting.

I tried getting the process ID and sending a kill signal, but that didn't work either.

Here's what it looks like after the bash script is executed: the SSH session from CircleCI gets stuck on the Azure machine and the build keeps running.

And here's a very basic version of the bash script that CircleCI executes.

I added -tt to the ssh command to force pseudo-terminal allocation:

#!/bin/sh

ssh -tt -o StrictHostKeyChecking=no azureuser@xx.xxx.xxx.xxx << EOT
cd /home/azureuser/pta-qa
git checkout .
git pull origin main
npm install
npm run build:prod || exit 1
pm2 restart pm2.json
echo "Successfully Deployed"
EOT

It could be the OS as well, since I didn't need to use the -tt flag on AWS. The OS on AWS is Amazon Linux 2, and the OS on Azure is Ubuntu 20.04.4 LTS.
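For reference, a minimal sketch (untested; host and paths as in the question) of one commonly suggested variant: ending the heredoc with an explicit exit so the forced-TTY session closes itself and ssh -tt can return control to CircleCI:

#!/bin/sh
# Sketch: same deployment flow, but terminate the remote shell explicitly.
# The trailing "exit" ends the forced pseudo-terminal session, so ssh -tt
# returns and the CI job can finish.
ssh -tt -o StrictHostKeyChecking=no azureuser@xx.xxx.xxx.xxx << EOT
cd /home/azureuser/pta-qa
git checkout .
git pull origin main
npm install
npm run build:prod || exit 1
pm2 restart pm2.json
echo "Successfully Deployed"
exit
EOT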

IBM System X3500 M3 IMM 404

Posted: 03 Jul 2022 08:51 PM PDT

The IMM gives a 404 error when I browse to its IP address.

Things I've tried:

  • Reset the IMM.
  • Reset the IMM configuration.

Neither of these helped.

What I'm Running:

I don't know if this will help at all.

  • Windows Server 2019 Standard
  • Intel(R) Xeon(R) CPU
  • 4.00 GB
  • 64-Bit OS, x64-Based CPU

How do I fix this?

Squid 5.2 access log: HTTPS connection elapsed time too long. How to explain 100 million milliseconds?

Posted: 03 Jul 2022 07:05 PM PDT

HTTPS traffic goes through Squid 5.2 to connect to Azure. The elapsed-time field in access.log is far too long: how can an entry of 100 million milliseconds (roughly 28 hours) be explained?


NGINX Force www and https at all times

Posted: 03 Jul 2022 06:55 PM PDT

I have recently discovered that there are some issues in my nginx vhost configuration file; for instance, I was told that on some occasions it downloads a PHP file, while on others it throws an SSL error.

I've done a little research and a few experiments but nothing really works the way I want it to. My end goal here is to have the website work on forced https + www at all times, so even if someone enters myurl.com or www.myurl.com they will always be redirected to https://www.myurl.com

Here is my current myurl.conf, which I believe looks overly complicated:

server {
    if ($host = www.myurl.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80 default_server;
    listen [::]:80 default_server;
    root /var/www/html;
    index index.php;
    server_name myurl.com www.myurl.com;
    return 301 https://www.myurl.com$request_uri;

    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload; always";
    add_header X-Frame-Options SAMEORIGIN;
    add_header X-Content-Type-Options nosniff;
    add_header X-XSS-Protection "1; mode=block";
    add_header 'Referrer-Policy' 'no-referrer';
    add_header Expect-CT 'enforce; max-age=3600';
    proxy_cookie_path ~(.*) "$1; SameSite=strict; secure; httponly";

    location / {
        try_files $uri $uri.html $uri/ @extensionless-php;
    }

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/var/run/php/php7.4-fpm.sock;
        fastcgi_hide_header X-Powered-By;
        fastcgi_hide_header X-CF-Powered-By;
    }

    location @extensionless-php {
        rewrite ^(.*)$ $1.php last;
    }

    location ~* \.(jpg|jpeg|png|gif|ico|css|js|woff|woff2|ttf|svg|webp)$ {
        expires 365d;
    }

    if ($allow_visit = no) {
        return 403;
    }

    if ($geoip_country_code = CN) {
        return 403;
    }

    if ($bad_referer) {
        return 444;
    }
}

server {
    listen 443 ssl http2;
    root /var/www/html;
    index index.php;
    server_name myurl.com;
    return 301 https://www.myurl.com$request_uri;

    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload; always";
    add_header X-Frame-Options SAMEORIGIN;
    add_header X-Content-Type-Options nosniff;
    add_header X-XSS-Protection "1; mode=block";
    add_header 'Referrer-Policy' 'no-referrer';
    add_header Expect-CT 'enforce; max-age=3600';
    proxy_cookie_path ~(.*) "$1; SameSite=strict; secure; httponly";

    location / {
        try_files $uri $uri.html $uri/ @extensionless-php;
    }

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/var/run/php/php7.4-fpm.sock;
    }

    location @extensionless-php {
        rewrite ^(.*)$ $1.php last;
    }

    location ~* \.(jpg|jpeg|png|gif|ico|css|js|woff|woff2|ttf|svg|webp)$ {
        expires 365d;
    }

    if ($allow_visit = no) {
        return 403;
    }

    if ($geoip_country_code = CN) {
        return 403;
    }

    if ($bad_referer) {
        return 444;
    }

    ssl_certificate /etc/letsencrypt/live/www.myurl.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/www.myurl.com/privkey.pem; # managed by Certbot
    ssl_dhparam /etc/nginx/ssl/dhparam.pem;
}

server {
    listen 443 ssl http2;
    root /var/www/html;
    index index.php;
    server_name www.myurl.com;

    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload; always";
    add_header X-Frame-Options SAMEORIGIN;
    add_header X-Content-Type-Options nosniff;
    add_header X-XSS-Protection "1; mode=block";
    add_header 'Referrer-Policy' 'no-referrer';
    add_header Expect-CT 'enforce; max-age=3600';
    proxy_cookie_path ~(.*) "$1; SameSite=strict; secure; httponly";

    location / {
        try_files $uri $uri.html $uri/ @extensionless-php;
    }

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/var/run/php/php7.4-fpm.sock;
    }

    location @extensionless-php {
        rewrite ^(.*)$ $1.php last;
    }

    location ~* \.(jpg|jpeg|png|gif|ico|css|js|woff|woff2|ttf|svg|webp)$ {
        expires 365d;
    }

    if ($allow_visit = no) {
        return 403;
    }

    if ($geoip_country_code = CN) {
        return 403;
    }

    if ($bad_referer) {
        return 444;
    }

    ssl_certificate /etc/letsencrypt/live/www.myurl.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/www.myurl.com/privkey.pem; # managed by Certbot
    ssl_dhparam /etc/nginx/ssl/dhparam.pem;
}

Some expert advice would be greatly appreciated. Thank you.
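For comparison, the pattern usually recommended for forced https + www keeps the redirects in their own minimal server blocks and serves the site from exactly one block; a sketch (certificate paths as in the question, everything else illustrative and untested):

# Port 80: send both hostnames straight to https://www
server {
    listen 80;
    listen [::]:80;
    server_name myurl.com www.myurl.com;
    return 301 https://www.myurl.com$request_uri;
}

# Port 443, bare domain: redirect to the www host
server {
    listen 443 ssl http2;
    server_name myurl.com;
    ssl_certificate /etc/letsencrypt/live/www.myurl.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/www.myurl.com/privkey.pem;
    return 301 https://www.myurl.com$request_uri;
}

# Port 443, www: the only block that actually serves content
server {
    listen 443 ssl http2;
    server_name www.myurl.com;
    ssl_certificate /etc/letsencrypt/live/www.myurl.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/www.myurl.com/privkey.pem;
    root /var/www/html;
    index index.php;
    # headers, PHP handling and filters live here, and only here
}

Keeping the redirect-only blocks free of root, index, and location directives is what avoids the "sometimes it downloads a PHP file" class of problem: a request can only ever be answered by the one fully configured vhost.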

gpg: certify key's public key gets exported along with sub-key's public key

Posted: 03 Jul 2022 02:20 PM PDT

I am using gpg. My keyring structure is explained below.

I have a certify key, and under it I have:

  • Encryption sub-key
  • Authentication Sub-key

To export a sub-key, the following steps are executed:

Step 1: This command lists all the public keys. I take the keyid of the sub-key I want to export, for example the encryption sub-key.

gpg --keyid-format long --with-fingerprint --list-keys  

Step 2: I am using the command below to export the sub-key:

gpg --export --armor --output public-key.asc <keyid>!  

But when I inspect the result using the command below, I can see that the public key of my certify key has been exported as well:

gpg --list-packets public-key.asc | grep "\(packet\|keyid\)"  

So my questions are:

Why is it exporting the public key of the certify key?

When sharing the encryption sub-key's public key to the keyserver, will it share the certify key's public key too? If yes, is there any security issue with this?

Ingress in GKE does not do the routing identically despite same IP at DNS level

Posted: 03 Jul 2022 06:50 PM PDT

I have setup in my GKE cluster an nginx ingress as follows:

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm install ingress-nginx ingress-nginx/ingress-nginx --namespace nginx-ingress

A load balancer with its IP came up.

Now I added two DNS records at Cloudflare (eu1.my-domain.com and test.my-domain.com) pointing to that IP.

In addition I created a namespace app-a

kubectl create namespace app-a
kubectl label namespace app-a project=a

and deployed an app there:

apiVersion: v1
kind: Service
metadata:
  name: echo1
  namespace: app-a
spec:
  ports:
  - port: 80
    targetPort: 5678
  selector:
    app: echo1
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo1
  namespace: app-a
spec:
  selector:
    matchLabels:
      app: echo1
  replicas: 2
  template:
    metadata:
      labels:
        app: echo1
    spec:
      containers:
      - name: echo1
        image: hashicorp/http-echo
        args:
        - "-text=echo1"
        ports:
        - containerPort: 5678
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo-ingress-global
  namespace: app-a
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - host: "test.my-domain.com"
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: echo1
            port:
              number: 80

Things look good in Lens, so I went to test it out.

When I enter eu1.my-domain.com, I get the intended response.

But when I enter test.my-domain.com, the site is unreachable (DNS_PROBE_FINISHED_NXDOMAIN), although I expected to see the dummy output of the dummy app.

Even more strangely, whether I get the well-responding result or the non-responding one, nothing shows up in the logs of the nginx controller for any of the calls.

Can you help me get the test.my-domain.com homepage accessible?
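DNS_PROBE_FINISHED_NXDOMAIN is a client-side DNS failure, so a first check is whether both hostnames actually resolve to the load balancer IP (hostnames as in the question):

# Compare what DNS returns for the working and the failing name.
# NXDOMAIN for test.my-domain.com would mean the Cloudflare record is
# missing or not propagated, so the request never reaches the ingress -
# which would also explain the silent nginx controller logs.
dig +short eu1.my-domain.com
dig +short test.my-domain.com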

How can I host a local website on a domain using nginx?

Posted: 03 Jul 2022 09:32 PM PDT

I have a server that is running at 127.0.0.1:8323

This port is not reachable from outside.

I want to serve it as https://example.com/website/index.php over port 80.

How can I do this using nginx?

I have tried using proxy_pass in a server block, like:

server {
    listen 80;
    location website/ {
        proxy_pass https://127.0.0.1:8323;
    }
}

But it returns a 404.
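Two details in that block commonly produce exactly this 404: the location path is missing its leading slash, and the upstream is addressed as https although the local server speaks plain HTTP. A minimal corrected sketch (untested; hostname illustrative):

server {
    listen 80;
    server_name example.com;

    # "/website/" (leading slash), not "website/"
    location /website/ {
        # the backend listens on plain http, not https
        proxy_pass http://127.0.0.1:8323/;
        proxy_set_header Host $host;
    }
}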

sshd misconfigured; only access is via website mPanel

Posted: 03 Jul 2022 10:42 PM PDT

EDIT 2 :

Due to the influence of a virus, a late-night session,
and very rusty knowledge of SSH, I made a silly mistake:
deleted the server's private keys from /etc/ssh/,
because I thought they didn't belong there.

The articles and documentation that I had recently read
were either too simplistic or too complex,
and I couldn't quickly discover the info that I needed,
so before turning off for the night I asked the question below.

Now that I've sorted it out, you can just skip to my answer.

I'm leaving the question here because the comments would not make sense without it.

===============================

Original question:

A remote server will not accept any kind of ssh connection.
The only access I have now is via the hosting provider's website.
The reason is that sshd keys are misconfigured.
(Due to silly mistakes; it was working fine for years, and I screwed it up yesterday.)

Although it has "PasswordAuthentication yes", it no longer asks, and simply closes the connection.

Instructions for setting up SSH keys assume you have password access from a local terminal to send new keys to the server, but I don't have that.

With only mPanel web access, what is the easiest way to get sshd to accept passwords again?

Alternatively, is there a way to set up keys on the server and on the client without having to transfer files?
I doubt it, as the mpanel has no copy/paste, so I would have to type public keys by hand (doesn't seem practical).

Hope I can do it just by editing sshd config and/or hiding some files.

It's CentOS 7 with OpenSSH 6.6 (compatible with OpenSSH 9 on Manjaro at home).

Earlier, when key authentication failed, it asked for a password, but I hadn't used the pw for so long I couldn't find it for a while, and continued trying with changed key configurations (changing files via sftp in Filezilla).
After a few more botched login attempts it stopped asking for a password, and the filezilla connection broke too.
(Hadn't read the ssh docs for years, and forgot some important points.)

I wonder if a flag has been set somewhere, and sshd won't ask for a password until the flag is cleared...?

EDIT:

The connection is closed immediately after 'KEXINIT sent'.
This is because the local and remote keys don't match.

debug1: SSH2_MSG_KEXINIT sent
Connection closed by ... (the server)

The puzzle is "why does it not ask for a password?".
But I don't even care why; I just want to reset that sshd daemon,
and don't know enough at the moment to be confident about how.
I guess I'll restart that service soon; maybe change some config.
Am reading Red Hat EL7 docs, but it will take a while ... ;)

fail2ban is not installed. I did not try any password at all with ssh, as it stopped asking before I found the password; I have used the password in mPanel, and it's correct.
What I'm hoping for is general info about how to safely and easily reset sshd, rather than diagnosing what's gone wrong.
I will need to set it up to accept passwords so I can then configure it with new keys. Am now searching for info on sshd server setup (most info is about client setup).
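Since the root cause (per EDIT 2) was deleted host keys in /etc/ssh/, the standard recovery from a web console is to regenerate them and restart the daemon; a minimal sketch for CentOS 7:

# Regenerate all missing SSH host keys in /etc/ssh/
# (ssh-keygen -A creates any default key type that does not exist yet).
ssh-keygen -A
# Restart sshd so it picks up the new host keys.
systemctl restart sshd

Clients will then warn that the host key changed; the stale entry can be removed locally with ssh-keygen -R <server>.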

PCIe - Training error on device - Link degraded, macLinkWidth = x16, negotiatedLinkWidth = x8

Posted: 03 Jul 2022 05:02 PM PDT

I've placed a PCIe RAID adapter card with onboard SSDs (AORUS RAID ADAPTOR with 4 x PCIe 3.0 512GB NVMe SSDs) in a new (2020) Dell OptiPlex 7080.

The system boots fine most times, but it tends not to find the drive during a soft reboot, for example. It's very temperamental on boot, but works fine once the OS is booted.

The built-in boot diagnostics produce the following warning:

PCIe - Training error on device - Link degraded, macLinkWidth = x16, negotiatedLinkWidth = x8  

Is this warning related to the boot device sometimes not being found, or does it simply indicate the card is x8 and not x16?

If unrelated, what could be causing the system to not find the boot device at times? I've checked and reseated the drives on the adapter, and the adapter within the PCIe slot, several times.

Thanks for looking.

Why is IIS NOT reading/using the site's Web.config, while IIS Manager is correctly accessing the Web.config?

Posted: 03 Jul 2022 04:06 PM PDT

After "more time than was prudent" debugging an issue with handlers not being applied correctly, I've determined that the SiteRoot/web.config shown in IIS Manager is not actually used by IIS.

How do I know this? I've replaced Web.config with invalid XML - the site continues to run with default handlers and modules, while IIS Manager will, rightfully, throw an error on the invalid XML.

Information:

  • The test/invalid Web.config is not being read by IIS, or it would fail to parse.
    • Static content is being served, with a root relative to the Web.config.
  • The test/invalid Web.config is being read by IIS Manager, as it fails to parse/load (as expected).
    • Using "Explore" correctly opens up the folder the Web.config file exists in.
  • The NTFS case-sensitivity is disabled per this answer. The same issue persists with both web.config and Web.config casings.
  • The AppPool is running under a local account and the effective NTFS access has been verified.
  • There are no related Windows Application or System event logs indicating there was an error reading or parsing the configuration.

What might be occurring, and what further diagnosis can be done?
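One way to cross-check which physical path IIS itself (as opposed to IIS Manager) has mapped for the site is appcmd; a quick sketch:

REM List all sites and their bindings.
%windir%\system32\inetsrv\appcmd.exe list sites

REM List virtual directories with their physical paths; confirm the
REM path IIS actually serves from is the folder holding your Web.config.
%windir%\system32\inetsrv\appcmd.exe list vdir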

Block websites to a specific user in windows server 2016

Posted: 03 Jul 2022 02:02 PM PDT

In the company I'm in, they want to block a specific user from accessing some websites.

Users have to log in on the computer, so my idea was to block the sites using Group Policy Management, but I can't find such an option anywhere. Other users should still be able to access those websites.

I have been searching for something that can help me, but I haven't found anything. I usually don't work with Windows Server (and the like), so I just know the basics.

Migrate user accounts from Azure AD to on-premise AD

Posted: 03 Jul 2022 10:03 PM PDT

I saw a few questions related to the situation we're in, but not the answer I needed.

We're mostly a cloud-based company (G Suite, Azure AD, etc.). Azure AD has too many limitations, and we're going to work with new HR software that requires a full-fledged AD.

The environment has been set up with Azure AD; all the computers (Windows and Mac) use Azure AD credentials to log on.

Now we would like to create a hybrid environment. I already figured out that we will need to rebuild the AD: export Azure AD, recreate an on-premises AD, and then sync with Azure AD. Now for the questions:

If we recreate the on-premises AD and sync with Azure AD, will the user accounts still be recognized, or will all workstations need to be reconfigured to open a new session under the user account and then transfer the data? Or would it re-associate the same session?

A one-time password change would not be the end of the world. But I remember from AD that if you delete an account and re-create one with the same details, it is considered a new user: it creates a new user profile folder, and you can no longer log on to the old user session. Would the same happen here? And if so, is there a workaround?

dcdiag DNS test fails, but DNS seems to be working properly

Posted: 03 Jul 2022 09:02 PM PDT

Active Directory setup:

Single forest, 3 domains, with 1 domain controller each. All running server 2008 R2, with the same domain/forest functional level.

DNS clients are configured as follows:

DC1 -> DC2 (prim), DC1 (sec)

DC2 -> DC1 (prim), DC2 (sec)

DC3 -> DC1 (prim), DC3 (sec)

All zones are replicated throughout the entire forest, and each DNS server is set up with 8.8.8.8/8.8.4.4 as forwarders.

Problem:

Everything appears to be working as it should. AD is replicating properly, DNS is responsive and not causing any issues, BUT when I run dcdiag /test:dns, the enterprise DNS test fails on DC2 and DC3 with the following error:

TEST: Forwarders/Root hints (Forw)
   Error: All forwarders in the forwarder list are invalid.
   Error: Both root hints and forwarders are not configured or broken.
   Please make sure at least one of them works.

Symptoms:

Event viewer is constantly showing these 2 event ID's for DNS client:

ID 1017 - The DNS server's response to a query for name INTERNAL RECORD indicates that no records of the type queried are available, but could indicate that other records for the same name are present.

ID 1019 - There are currently no IPv6 DNS servers configured for any interface on this host. Please configure DNS server settings, or renew your dynamic IP settings. (strange, as IPv6 is disabled on the network card)

nslookup is working as expected, and finding any and all records appearing in ID 1017, no matter which DNS server I select to use.

While running dcdiag, the following events appear:

Event ID 10009: DCOM was unable to communicate with the computer 8.8.4.4 using any of the configured protocols.

DCOM was unable to communicate with the computer 8.8.8.8 using any of the configured protocols.

Event ID 1014: Name resolution for the name 1.0.0.127.in-addr.arpa timed out after none of the configured DNS servers responded.

I've run wireshark while dcdiag is running its test, and the internal DNS servers do resolve anything thrown at them, but then the server continues querying Google DNS and root hints.

What the hell is going on? What am I missing here?

Edit: The actual enterprise DNS test error messages are:

Summary of test results for DNS servers used by the above domain controllers:

   DNS server: 128.63.2.53 (h.root-servers.net.)
      1 test failure on this DNS server
      Name resolution is not functional. _ldap._tcp.domain1.local. failed on the DNS server 128.63.2.53

   DNS server: 128.8.10.90 (d.root-servers.net.)
      1 test failure on this DNS server
      PTR record query for the 1.0.0.127.in-addr.arpa. failed on the DNS server 128.8.10.90
      Name resolution is not functional. _ldap._tcp.domain1.local. failed on the DNS server 128.8.10.90

   DNS server: 192.112.36.4 (g.root-servers.net.)
      1 test failure on this DNS server
      Name resolution is not functional. _ldap._tcp.domain1.local. failed on the DNS server 192.112.36.4

etc., etc.
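A quick sanity check that separates "forwarders actually broken" from "dcdiag test artifact" is querying the forwarders directly from the affected DCs:

REM If these succeed from DC2/DC3, the forwarders themselves are
REM reachable, and the dcdiag failure points at the test traffic being
REM blocked or dropped (e.g. the PTR query for 1.0.0.127.in-addr.arpa).
nslookup google.com 8.8.8.8
nslookup google.com 8.8.4.4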

HAProxy, how to disable logs for the stats endpoint

Posted: 03 Jul 2022 08:02 PM PDT

I am enabling stats with something similar to this configuration:

global
  log /var/run/log local0 info

defaults
  log global

listen stats
  bind *:9090
  stats enable
  stats auth secret:pass
  stats refresh 5s
  stats show-legends
  stats show-node
  stats uri /stats

It works, but now I would like to know if there is a way to prevent the stats endpoint from emitting logs. Currently my logs contain multiple lines like:

Connect from x.x.x.x:33970 to y.y.y.y:9090 (stats/HTTP)  

Any idea how to prevent the stats requests from being logged?

I already tried the following in the listen stats definition, without success:

 http-request set-log-level silent  
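One approach worth trying (a sketch, not verified against 5.2 specifically) is to switch logging off for just this proxy with the no log directive, so the stats listener stops inheriting log global from the defaults section:

listen stats
  bind *:9090
  no log                  # drop the "log global" inherited from defaults
  stats enable
  stats auth secret:pass
  stats refresh 5s
  stats show-legends
  stats show-node
  stats uri /stats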

SSH ForwardAgent is receiving "Connection closed by remote host"

Posted: 03 Jul 2022 06:01 PM PDT

I'm trying to connect to a remote server using SSH agent forwarding, but I keep hitting the same issue:

ssh_exchange_identification: Connection closed by remote host  

I've set up my ~/.ssh/config as follows:

#Proxy
Host my.remote.proxy.com
  Hostname IPProxy.IP.IP.IP
  User my.user
  IdentityFile ~/.ssh/id_rsa
  ForwardAgent yes

#Remote server
Host my.remote.server.com
  Hostname IPRemote.IP.IP.IP
  User my.user
  IdentityFile ~/.ssh/id_rsa
  ProxyCommand ssh my.user@my.remote.proxy.com nc -w 10 %h %p 2> /dev/null

I'm able to ssh correctly into my.remote.proxy.com. I'm also able to ssh correctly from my.remote.proxy.com to my.remote.server.com (with the private key there, for testing purposes).

My problem is that I'm not able to ssh from my host to my.remote.server.com using agent forwarding.

I've set up /etc/ssh/sshd_config on both my.remote.proxy.com and my.remote.server.com as follows:

AllowAgentForwarding yes
AllowTcpForwarding yes
#GatewayPorts no
X11Forwarding yes

I checked, and both servers run a version of OpenSSH that supports agent forwarding:

openssh-7.4p1-13.el7_4.x86_64
openssh-clients-7.4p1-13.el7_4.x86_64
openssh-server-7.4p1-13.el7_4.x86_64

The /var/log/secure log on my.remote.proxy.com shows:

Nov 15 04:39:07 [localhost] sshd[7866]: Accepted publickey for my.user
Nov 15 04:39:07 [localhost] sshd[7866]: pam_unix(sshd:session): session opened for user my.user by (uid=0)
Nov 15 04:39:08 [localhost] sshd[7869]: Received disconnect from IPPUBLIC port 61378:11: disconnected by user
Nov 15 04:39:08 [localhost] sshd[7869]: Disconnected from IPPUBLIC port 61378
Nov 15 04:39:08 [localhost] sshd[7866]: pam_unix(sshd:session): session closed for user my.user

Nothing is showing up from /var/log/secure on my.remote.server.com.
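As an aside, OpenSSH 7.3+ (both hosts above run 7.4) supports ProxyJump, which replaces the nc-based ProxyCommand and sidesteps failures caused by nc's output or timeouts during the identification exchange; a sketch of the equivalent config:

#Proxy
Host my.remote.proxy.com
  Hostname IPProxy.IP.IP.IP
  User my.user
  IdentityFile ~/.ssh/id_rsa

#Remote server, reached through the proxy
Host my.remote.server.com
  Hostname IPRemote.IP.IP.IP
  User my.user
  IdentityFile ~/.ssh/id_rsa
  ProxyJump my.remote.proxy.com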

Nginx redirecting every url to localhost

Posted: 03 Jul 2022 03:02 PM PDT

I have a Django website running with Nginx and Gunicorn. Every time I request a URL on the server, for example website/url, it redirects to localhost/url. I have included the nginx settings from both nginx.conf and sites-available/site-name below.

nginx.conf:

user www-data;
worker_processes auto;
pid /run/nginx.pid;

events {
    worker_connections 768;
    # multi_accept on;
}

http {
    ##
    # Basic Settings
    ##

    client_max_body_size 5M;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    # server_tokens off;

    # server_names_hash_bucket_size 64;
    server_name_in_redirect off;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # SSL Settings
    ##

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;

    ##
    # Logging Settings
    ##

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ##
    # Gzip Settings
    ##

    gzip on;
    gzip_disable "msie6";

    # gzip_vary on;
    # gzip_proxied any;
    # gzip_comp_level 6;
    # gzip_buffers 16 8k;
    # gzip_http_version 1.1;
    # gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

    ##
    # Virtual Host Configs
    ##

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}

nginx/sites-available/site-name:

server {
    listen 80;
    server_name url;
    server_name_in_redirect off;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    location /static/ {
        alias /opt/itc/iitb-tech/static/;
    }

    location / {
        proxy_pass http://unix:/opt/itc/iitb-tech/itc_iitb/itc_iitb.sock;
        proxy_set_header X-Forwarded-Host url.org;
        proxy_set_header X-Real-IP $remote_addr;
        add_header P3P 'CP="ALL DSP COR PSAa PSDa OUR NOR ONL UNI COM NAV"';
    }
}

Django settings file

"""  Django settings for itc_iitb project.    Generated by 'django-admin startproject' using Django 1.8.7.    For more information on this file, see  https://docs.djangoproject.com/en/1.8/topics/settings/    For the full list of settings and their values, see  https://docs.djangoproject.com/en/1.8/ref/settings/  """    # Build paths inside the project like this: os.path.join(BASE_DIR,   ...)  import os   BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))      # Quick-start development settings - unsuitable for production  # See   https://docs.djangoproject.com/en/1.8/howto/deployment/checklist/    # SECURITY WARNING: don't run with debug turned on in production!  DEBUG = False    ALLOWED_HOSTS = ['server ip','127.0.0.1','localhost']    ALLOWED_PORTS = ['*']  # Application definition    INSTALLED_APPS = (  'aero',  'erc',  'biotech',  'mnp',  'krittika',  'main',  'srg',  'ITSP2017',  'ec',  'django.contrib.admin',  'django.contrib.auth',  'django.contrib.contenttypes',  'django.contrib.sessions',  'django.contrib.messages',  'django.contrib.staticfiles',  )    MIDDLEWARE_CLASSES = (      'django.contrib.sessions.middleware.SessionMiddleware',      'django.middleware.common.CommonMiddleware',      'django.middleware.csrf.CsrfViewMiddleware',      'django.contrib.auth.middleware.AuthenticationMiddleware',      'django.contrib.auth.middleware.SessionAuthenticationMiddleware',      'django.contrib.messages.middleware.MessageMiddleware',      'django.middleware.clickjacking.XFrameOptionsMiddleware',      'django.middleware.security.SecurityMiddleware',  )  ROOT_URLCONF = 'itc_iitb.urls'    TEMPLATES = [  {      'BACKEND': 'django.template.backends.django.DjangoTemplates',      'DIRS': [],      'APP_DIRS': True,      'OPTIONS': {          'context_processors': [              'django.template.context_processors.debug',              'django.template.context_processors.request',              'django.contrib.auth.context_processors.auth',              'django.contrib.messages.context_processors.messages',          ],      },    },   ]     WSGI_APPLICATION = 'itc_iitb.wsgi.application'      # Database  # https://docs.djangoproject.com/en/1.8/ref/settings/#databases    DATABASES = {      'default': {          'ENGINE': 'django.db.backends.postgresql_psycopg2', # Add   'postgres$          'NAME': 'dbname',                      # Or path to database    file$          # The following settings are not used with sqlite3:          'USER': 'user',          'PASSWORD': 'pwd',          'HOST': 'localhost',                      # Empty for   localhost thr$          'PORT': '',                      # Set to empty string for   default.      }  }    # Internationalization  # https://docs.djangoproject.com/en/1.8/topics/i18n/    LANGUAGE_CODE = 'en-us'    TIME_ZONE = 'UTC'    USE_I18N = True    USE_L10N = True    USE_TZ = True    STATIC_URL = '/static/'  STATIC_ROOT = "/opt/itc/iitb-tech/static/"  MEDIA_ROOT = BASE_DIR + '/ITSP2017/static/media/'  MEDIA_URL = '/static/media/'    APPEND_SLASH = True  

Apache 2.4 ErrorDocument for multiple subdomains

Posted: 03 Jul 2022 03:02 PM PDT

We're running a large project for different customers; each has its own subdomain. Apache should not execute any script if an invalid subdomain is used. Instead, an error page should be shown.

Working:

This is our zzz-default.conf, which is the last vhost and matches all requests that are not caught by another vhost.

<VirtualHost *:80>
    ServerName project.example.com
    ServerAlias *.project.example.com
    Redirect 404 /
    DocumentRoot /var/www/html/
    ErrorDocument 404 "This Subdomain does not exist."
    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>

What's not working:

ErrorDocument 404 /404.html  

This file is located in /var/www/html/ and contains pure html, no scripts.

Our problem seems to be the redirect rule, but we need this to match all subdomains and rewrite to /.

If I enable this and call an invalid subdomain, I get

Not Found

The requested URL / was not found on this server.

Additionally, a 404 Not Found error was encountered while trying to use an ErrorDocument to handle the request.

Anybody know why?

Edit: The other VHOSTs are defined as

<VirtualHost *:80>
    ServerName client.project.example.com
    Header always append X-Frame-Options SAMEORIGIN
</VirtualHost>

Include /path/to/local/client/httpd-vufind.conf

There are 13 VHOSTs defined like this and then the above zzz-default.conf is loaded.
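The "Additionally, a 404 Not Found error was encountered..." message suggests the blanket Redirect 404 / also swallows Apache's internal subrequest for /404.html. One workaround to sketch (assumes PCRE lookahead support in mod_alias; untested): exclude the error page itself from the 404 rule.

<VirtualHost *:80>
    ServerName project.example.com
    ServerAlias *.project.example.com
    DocumentRoot /var/www/html/
    # 404 for everything except the error page itself, so the
    # ErrorDocument subrequest can still be served from disk.
    RedirectMatch 404 "^/(?!404\.html$)"
    ErrorDocument 404 /404.html
    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>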

How to increase the Apache mod_proxy to Jetty 5-minute timeout

Posted: 03 Jul 2022 11:01 PM PDT

We use Apache and Jetty to install components behind a firewall. Some actions take a while (10-15 minutes). Apache is the proxy and Jetty is the proxy target on some machines. Everything works fine for actions taking less than 5 minutes; actions taking longer than 5 minutes fail with a 502 proxy error.

I have seen some similar topics and the advice was to define timeout and keepalive - both did not help.

Our setup is: Windows 2012 R2, Apache 2.4.9, Jetty 7.

Initially I forgot to mention that there is a firewall between Apache and Jetty.

In apache httpd.conf we have:

ProxyPassMatch ^/([a-z0-9\-]+)/(.+)$ http://$1:3000/$2 timeout=3000 ttl=3000 Keepalive=On

We hoped that timeout=3000 (3000 seconds) would keep Apache waiting about 50 minutes for the response from Jetty. Keepalive and ttl are trials ...

On Jetty we are calling a simple Groovy script that simply sits and waits for a long time. If the wait time is small, this works as expected. If the wait time is beyond 5 minutes, we get an error:

Apache access log (the request starts at 17:25):

xxx.xxx.xxx.xxx- - [02/Apr/2016:17:25:47 +0200] "GET /server/scripts/waitlong.groovy HTTP/1.1" 502 445 "-" 300509428 "-" "10.119.1.20" 10.119.1.20 3000  

As you can see, the duration is about 5 minutes (~300509428 microseconds), and thus a timeout; it should have lasted for 10 minutes.

Apache error log (the request times out at 17:30):

[Sat Apr 02 17:30:47.815050 2016] [proxy_http:error] [pid 11656:tid 12736] (OS 10060)A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.  : [client 10.119.1.20:60466] AH01102: error reading status line from remote server w-bidb1hf:3000
[Sat Apr 02 17:30:47.815050 2016] [proxy:error] [pid 11656:tid 12736] [client 10.119.1.20:60466] AH00898: Error reading from remote server returned by /w-bidb1hf/scripts/waitlong.groovy

Any ideas how to keep Apache waiting longer?

Outlook 2013 and Exchange 2016 "The connection to Microsoft Exchange is unavailable. Outlook must be online or connected to complete this action"

Posted: 04 Jul 2022 12:05 AM PDT

I just installed my new Exchange 2016 server and got everything working, but when I add it to Outlook 2013, the autodiscover setup works and it says to restart Outlook.

After restarting Outlook I get a message saying: "The connection to Microsoft Exchange is unavailable. Outlook must be online or connected to complete this action." After clicking OK, I get a window to check the mailbox name on the Exchange server, but nothing works.

After closing that window, the profile for the Exchange server is gone.

I already have 2 mailbox accounts from another Exchange server in Outlook that work fine; this is a fresh new Exchange server I'm trying to get working.

I have seen many people have this problem because of some old .pst file and the like, but I have tried everything I could find, on 2 computers that have Outlook working with other Exchange accounts.

Does anyone have any tips on what it could be?

Edit: I have now upgraded to Office 2016 and the latest Windows updates to see if updates change anything, but I still get the same error. With Outlook 2016 I get it when adding the account/profile, see screenshot:

As you can see in the screenshot, autodiscover works and my webmail works, but I get the error when adding the Exchange account to Outlook 2016.

Edit2: After some searching I found out where the problem is and created a new question for it: How to change external URL on MAPI over HTTP on Exchange 2016?

HTTP working, HTTPS not working

Posted: 03 Jul 2022 05:02 PM PDT

I set up Comodo SSL on CentOS 6.7 with Apache 2.2.15, and we are running a CakePHP application on the server. When I go to http://domain it works, but when I go to https://domain it says 404 Not Found (even though I see the lock icon and https in green).

Here's part of my /etc/httpd/conf/httpd.conf file:

Include conf.d/*.conf

Listen 80
<VirtualHost *:80>
    ServerAdmin info@domain
    DocumentRoot /var/www/vhosts/domain/httpdocs/app/webroot/
    ServerName domain
    <Directory /var/www/vhosts/domain/httpdocs>
        Allowoverride All
    </Directory>
</VirtualHost>

And /etc/httpd/conf.d/ssl.conf contains:

Listen 443
<VirtualHost *:443>
    DocumentRoot /var/www/vhosts/domain/httpdocs/app/webroot/
    ServerName domain
    SSLEngine on
    SSLCertificateFile /etc/httpd/ssl/domain.crt
    SSLCertificateKeyFile /etc/httpd/ssl/private.key
    SSLCertificateChainFile /etc/httpd/ssl/domain.ca-bundle
</VirtualHost>

I've stopped and started httpd multiple times, and I also tried https://domain:80, but that gives an error: This webpage is not available (ERR_CONNECTION_CLOSED).

Can the DocumentRoot be the same for both? If not, how do I manage that, since copying our application code into another folder is not feasible?

Any thoughts on what I'm doing wrong?
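On the first question: yes, both vhosts may share the same DocumentRoot. Separately, one asymmetry stands out in the configs above: the :80 vhost carries a <Directory> block with AllowOverride All, but ssl.conf has none, so CakePHP's .htaccess rewrite rules are ignored on HTTPS, which would yield exactly this 404. A hedged sketch of the SSL vhost with the block added:

Listen 443
<VirtualHost *:443>
    DocumentRoot /var/www/vhosts/domain/httpdocs/app/webroot/
    ServerName domain
    # Without this, .htaccess (CakePHP's mod_rewrite rules) is not
    # honored for the SSL vhost and pretty URLs return 404.
    <Directory /var/www/vhosts/domain/httpdocs>
        AllowOverride All
    </Directory>
    SSLEngine on
    SSLCertificateFile /etc/httpd/ssl/domain.crt
    SSLCertificateKeyFile /etc/httpd/ssl/private.key
    SSLCertificateChainFile /etc/httpd/ssl/domain.ca-bundle
</VirtualHost>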

How can I update an Exchange Server Monitoring Override?

Posted: 03 Jul 2022 10:03 PM PDT

Assume you have an Exchange 2013 server with existing server monitoring overrides, which you can list by running the following in the Exchange Management Shell:

Get-ServerMonitoringOverride -Server servername | ft -auto  

and the output shows something like:

Identity                                        ItemType PropertyName        PropertyValue
--------                                        -------- ------------        -------------
MailboxSpace\StorageLogicalDriveSpaceMonitor\G: Monitor  MonitoringThreshold 50000
MailboxSpace\StorageLogicalDriveSpaceMonitor\H: Monitor  MonitoringThreshold 50000
MailboxSpace\StorageLogicalDriveSpaceMonitor\L: Monitor  MonitoringThreshold 25000

In the above example there are server overrides that prevent the default Exchange monitoring from raising an alert when a drive drops below the default 100 GB limit.

And let's say that you want to change an existing override (for example the existing one has expired, or you want to change the PropertyValue of MonitoringThreshold to be 10000 instead).

How would you modify an existing server monitoring override in this instance?
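As far as I know there is no Set-ServerMonitoringOverride cmdlet in Exchange 2013, so the usual pattern is remove-then-re-add; a sketch (identity and value from the example above, duration illustrative):

# Drop the existing override...
Remove-ServerMonitoringOverride -Server servername `
    -Identity "MailboxSpace\StorageLogicalDriveSpaceMonitor\G:" `
    -ItemType Monitor -PropertyName MonitoringThreshold

# ...then re-add it with the new threshold (Duration is dd.hh:mm:ss).
Add-ServerMonitoringOverride -Server servername `
    -Identity "MailboxSpace\StorageLogicalDriveSpaceMonitor\G:" `
    -ItemType Monitor -PropertyName MonitoringThreshold `
    -PropertyValue 10000 -Duration 45.00:00:00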

php-fpm: locale settings change themselves

Posted: 03 Jul 2022 04:06 PM PDT

I experienced a bug with php-fpm: locale settings change by themselves, at random.

Here are the correct locale settings:

Array
(
    [decimal_point] => .
    [thousands_sep] =>
    [int_curr_symbol] =>
    [currency_symbol] =>
    [mon_decimal_point] =>
    [mon_thousands_sep] =>
    [positive_sign] =>
    [negative_sign] =>
    [int_frac_digits] => 127
    [frac_digits] => 127
    [p_cs_precedes] => 127
    [p_sep_by_space] => 127
    [n_cs_precedes] => 127
    [n_sep_by_space] => 127
    [p_sign_posn] => 127
    [n_sign_posn] => 127
    [grouping] => Array
        (
        )
    [mon_grouping] => Array
        (
        )
)

And here are the changed settings:

Array
(
    [decimal_point] => ,
    [thousands_sep] =>
    [int_curr_symbol] => EUR
    [currency_symbol] => €
    [mon_decimal_point] => ,
    [mon_thousands_sep] =>
    [positive_sign] =>
    [negative_sign] => -
    [int_frac_digits] => 2
    [frac_digits] => 2
    [p_cs_precedes] => 0
    [p_sep_by_space] => 1
    [n_cs_precedes] => 0
    [n_sep_by_space] => 1
    [p_sign_posn] => 1
    [n_sign_posn] => 1
    [grouping] => Array
        (
            [0] => 3
        )
    [mon_grouping] => Array
        (
            [0] => 3
        )
)

The problem occurs randomly.

When removing php-fpm and using FastCGI, the problem no longer occurs. How can I get this working with php-fpm? The problem occurs on shared hosting (we are the company which provides the hosting) and we really need php-fpm in order to use pools.

Thanks in advance!

EDIT: Today I discovered that the problem occurs when we use the ondemand process manager, and not with the static process manager.
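That points at worker lifecycle: setlocale() is process-wide in PHP and persists across requests served by the same FPM worker, so one script calling setlocale() can leak its locale into later requests; ondemand and static differ only in how workers are spawned and how long they live. A mitigation commonly sketched is pinning the locale per pool (pool file path illustrative):

; e.g. in /etc/php-fpm.d/www.conf - pin the locale for every worker,
; regardless of which process manager spawns it.
pm = ondemand
env[LC_ALL] = C
env[LANG] = C

Calling setlocale() explicitly at the start of each request is the more robust fix, since the environment only controls a worker's initial locale.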

Where is WSGI installed on Centos?

Posted: 03 Jul 2022 06:01 PM PDT

I am getting a permissions issue when running Django in daemon mode. Reading https://code.google.com/p/modwsgi/wiki/ConfigurationIssues#Location_Of_UNIX_Sockets, I think the solution is to configure the WSGISocketPrefix.

The problem is that /var/run/wsgi is nowhere to be found on my CentOS server.

The closest thing I can find is /etc/httpd/run/httpd.pid.

How can I find where wsgi is installed?

Or what other value can I set the WSGISocketPrefix equal to?
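WSGISocketPrefix doesn't point at an install location; it only names where mod_wsgi should create its runtime sockets, so /var/run/wsgi won't exist until it is configured. The mod_wsgi documentation's usual suggestion for RHEL/CentOS is a prefix under Apache's own run directory; a sketch for httpd.conf:

# Create mod_wsgi's UNIX sockets under Apache's ServerRoot
# ("run/wsgi" resolves to /etc/httpd/run/wsgi on stock CentOS),
# a location the httpd user can write to.
WSGISocketPrefix run/wsgi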

How to set custom $_SERVER variable for PHP

Posted: 04 Jul 2022 12:05 AM PDT

I'm working on a PHP web app which ALSO has some command line tools. I need the command line tools to detect the environment so that they connect with the correct DB credentials etc. The web app does this easily by checking $_SERVER['SERVER_NAME'] but that doesn't work for a shell script.

I'd like to create my own $_SERVER variable that the shell script can check. Ex: $_SERVER['MYAPP_ENVIRONMENT']. How do I do this?

I found this solution, but I don't see the same files in /etc/apache2/. I also found this, but they're using .htaccess and I'm not sure if I have mod_env; also, my app uses its own .htaccess file, so it would have to be edited every time I deploy.

I'm on a Dreamhost VPS, which runs Ubuntu 12.04 LTS
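On Apache, a SetEnv in the vhost config (needs mod_env) surfaces as $_SERVER['MYAPP_ENVIRONMENT'] without touching the app's .htaccess; the CLI side can read an exported shell variable via getenv(). A sketch of both halves (variable name from the question, values illustrative):

# Apache vhost config (mod_env) - visible to the web app:
#   SetEnv MYAPP_ENVIRONMENT production
#
# Shell side - exported variables reach PHP CLI via getenv():
export MYAPP_ENVIRONMENT=production
php -r 'var_dump(getenv("MYAPP_ENVIRONMENT"));'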

Can windows credentials be stored for 'All Users'?

Posted: 03 Jul 2022 09:02 PM PDT

I am looking for a way to store Windows credentials for 'All Users', as opposed to individually named users, in Windows 7.

The issue: we have a company server that is accessed by multiple users, each logging on with their unique credentials. While working on the server, each user needs to access paid-for services via a state (as in ND) website. When they click the link for these services, they are presented with a Windows Security challenge. All users enter a common set of credentials (same username and password) for access to the state server. A user only has to enter the state credentials once, and they are good for the rest of the day, even across logoffs and logons to our company server.

The kicker is that all individual user profiles are auto-deleted every night for business reasons. The users are wondering if the state credentials can be stored so that, no matter which user logs on to the company server, the credentials are always available when accessing the state's paid-for services, without having to type them in every day.
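Since the profiles are wiped nightly, one workaround is to re-create the stored credential at every logon rather than trying to persist it; Windows ships cmdkey for this. A sketch for a logon script (target and account are placeholders):

REM Store the shared state-website credential for whoever just logged on.
REM Run from a logon script / GPO so it exists in each fresh profile.
cmdkey /add:state.example.gov /user:SHAREDUSER /pass:SharedPassword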

Unable to connect to SMTP server (IIS) externally

Posted: 03 Jul 2022 07:00 PM PDT

I've set up an SMTP server using IIS 6 on Windows Server 2008. I've set it up for "All Unassigned" IP addresses on port 25. I've also added 127.0.0.1 and the external source IP to the "Relay" list. I've configured Windows Firewall to accept port 25.

I am able to connect to SMTP with telnet locally, but not externally from the IP I've added to the relay. I get the message: "Could not open connection to the host, on port 25: Connect failed".

A port scan shows that port 25 is open on the server.

Any idea what the issue might be, and how to fix it?

Missing LV in VG in LVM partition on Ubuntu

Posted: 03 Jul 2022 07:00 PM PDT

After a power failure, an Ubuntu 10.04 Server hard drive is no longer bootable. I tried using boot-repair, but it couldn't locate an operating system.

I ran gdisk to verify where the LVM partition was and that it was still intact. Here is the output:

GPT fdisk (gdisk) version 0.6.14

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.

Command (? for help): p
Disk /dev/sdb: 3907029168 sectors, 1.8 TiB
Logical sector size: 512 bytes
Disk identifier (GUID): 3A0E99EE-74F9-41F5-81A0-7B7D7235DE8E
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 3907029134
Partitions will be aligned on 2048-sector boundaries
Total free space is 2157 sectors (1.1 MiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048            4095   1024.0 KiB  EF02
   2            4096          503807   244.0 MiB   EF00
   3          503808      3907028991   1.8 TiB     8E00

Command (? for help): i
Partition number (1-3): 3
Partition GUID code: E6D6D379-F507-44C2-A23C-238F2A3DF928 (Linux LVM)
Partition unique GUID: 4F35492A-C6DD-4E31-9D53-8C88A74A1B48
First sector: 503808 (at 246.0 MiB)
Last sector: 3907028991 (at 1.8 TiB)
Partition size: 3906525184 sectors (1.8 TiB)
Attribute flags: 0000000000000000
Partition name:

So, it's still there and apparently intact, so I went on to run vgscan:

:/# vgscan

Reading all physical volumes. This may take a while... Found volume group "ubuntu" using metadata type lvm2

So, I did :/# vgchange -ay ubuntu followed by :/# lvs and got:

LV     VG     Attr   LSize   Origin Snap%  Move Log Copy%  Convert
root   ubuntu -wi-ao   4.40g
swap_1 ubuntu -wi-a- 260.00m

The thing is, there should be another LV in there, almost 1.8 TB in size, but it isn't showing.

So... is there any way to recover an LV that isn't shown by lvs? I need to recover one important file in there that was created after the last backup was made.

:/# vgdisplay

--- Volume group ---
VG Name               ubuntu
System ID
Format                lvm2
Metadata Areas        1
Metadata Sequence No  3
VG Access             read/write
VG Status             resizable
MAX LV                0
Cur LV                2
Open LV               0
Max PV                0
Cur PV                1
Act PV                1
VG Size               1.82 TiB
PE Size               4.00 MiB
Total PE              476870
Alloc PE / Size       1191 / 4.65 GiB
Free  PE / Size       475679 / 1.81 TiB
VG UUID               r3Z9Io-bWk7-i7wp-9QGZ-mF3o-ucQs-SdsaGW
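With vgdisplay reporting 1.81 TiB of free PEs, the big LV's entry is simply gone from the current VG metadata. LVM keeps automatic metadata archives, so one cautious avenue (a sketch only; inspect the archive file, which is plain text, before restoring anything) is vgcfgrestore:

# List archived metadata versions for the VG; look for one that still
# contains the missing ~1.8 TB logical volume.
vgcfgrestore --list ubuntu

# After inspecting the candidate file (name below is a placeholder):
vgcfgrestore -f /etc/lvm/archive/ubuntu_00005-1234567890.vg ubuntu
vgchange -ay ubuntu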

Nginx + Passenger : stop file uploads timing out after 30 seconds

Posted: 03 Jul 2022 08:02 PM PDT

I have a Ruby app which runs under Passenger and nginx. If I try to upload a largish file (e.g. 15+ MB), when it gets to 30s in, the upload restarts (according to Chrome), and at the end of the next 30 seconds it gives up and I get a timeout.

Is there an option I can put in my nginx config to prevent this from happening? Here's what my current nginx config looks like:

worker_processes  1;

events {
    worker_connections  1024;
}

http {
    passenger_root /usr/local/lib/ruby/gems/1.8/gems/passenger-3.0.2;
    passenger_ruby /usr/local/bin/ruby;

    include       mime.types;
    default_type  application/octet-stream;

    sendfile        on;
    keepalive_timeout  65;

    gzip  on;
    gzip_min_length  1000;
    gzip_proxied     expired no-cache no-store private auth;
    gzip_types       text/plain application/xml text/css text/javascript application/x-javascript;
    gzip_disable     "MSIE [1-6]\.";

    server {
        listen 80;
        server_name alekskrotoski.com;
        root /var/www/apps/akrotoski/public;   # <--- be sure to point to 'public'!
        passenger_enabled on;
    }
}

I'm not an nginx expert and have a feeling this might be obvious; hope so, anyway. I already tried adding

proxy_read_timeout: 600;   

to the server block but that didn't help.
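For what it's worth, proxy_read_timeout only applies to proxied requests, and Passenger-served requests don't go through the proxy module; the directives usually tuned for large uploads in an nginx + Passenger setup are the client body ones (values illustrative, untested):

# inside the http or server block
client_max_body_size 100m;    # default 1m rejects large bodies outright
client_body_timeout 300s;     # more headroom between body read operations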

Cheers, max

Apache 2 .htaccess matching all sub directories of current directory

Posted: 03 Jul 2022 11:01 PM PDT

I want to disable the php engine in all directories below the current one.

I have tried using <Directory> and <DirectoryMatch>, but cannot find the correct regex syntax for matching just subdirectories.

Example directory structure:

files/folder1/
files/folder2/
files/folder3/folder3a/

I want to match folder1/, folder2/, folder3/ and folder3a/ but not files/

Any ideas?
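A sketch of one way to express this in the server config (not .htaccess, since <Directory> and <DirectoryMatch> aren't allowed there; assumes mod_php and a hypothetical /var/www/files base path). The trailing /.+ is what excludes files/ itself while matching everything below it:

# Matches /var/www/files/folder1, /var/www/files/folder3/folder3a, etc.,
# but not /var/www/files itself.
<DirectoryMatch "^/var/www/files/.+">
    php_admin_flag engine off
</DirectoryMatch>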
