Monday, December 6, 2021

Recent Questions - Server Fault



Can you run SAS drive from SAS backplane to SATA on mobo?

Posted: 06 Dec 2021 10:05 AM PST

Have a SAS drive plugged into a SAS port on the backplane, and it is connected to SATA on the motherboard. Is it possible to use it this way? It is a testing server for cloning another SSD.

nginx: force nginx to resolve DNS - error 9#9 recv() failed (connection refused)

Posted: 06 Dec 2021 09:37 AM PST

I have a hostname like this: https://abc.xyz.com

I want to force nginx to resolve the DNS every time a request arrives, because if the DNS of abc.xyz.com changes I want nginx to still be able to proxy to the site; therefore it needs to resolve the DNS again.

Now I followed this post, How to force nginx to resolve DNS (of a dynamic hostname) everytime when doing proxy_pass?, and my configuration looks like this:

server {
    ...
    resolver abc.xyz.com;
    set $backend "http://abc.xyz.com";
    proxy_pass $backend;
    ...
}

And I get this error:

[error] 9#9 recv() failed (111: connection refused) while resolving, resolver: <some-dns>:53

I don't understand what I'm doing wrong here; I guess it has something to do with the resolver. The only difference is that the poster in that thread used the DNS server 127.0.0.1. I don't think I want to do that, since the DNS/IP can change when I redeploy my app. I saw that you can give resolver a hostname, but it doesn't work and I get this error.

Any help would be appreciated!
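Editor's note: the `resolver` directive takes the address of a DNS *server*, not the name you want resolved. With `resolver abc.xyz.com;` nginx treats that host as a name server and queries it on port 53, which matches the "connection refused ... :53" error above. A sketch (the name server IP 10.0.0.2 is a placeholder for your network's actual DNS; inside Docker the embedded resolver is 127.0.0.11):

```nginx
server {
    ...
    # Point resolver at a reachable DNS server; valid= caps how long
    # nginx caches the answer before re-resolving.
    resolver 10.0.0.2 valid=30s;

    # Using a variable in proxy_pass forces per-request resolution.
    set $backend "http://abc.xyz.com";
    proxy_pass $backend;
    ...
}
```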

Incorrect PTP timestamps

Posted: 06 Dec 2021 09:07 AM PST

I have trouble syncing two Linux systems using PTP.

Setup:

Two PCBs with a BeagleCore module and a DP83640 PHY are connected to each other over Ethernet. One board should act as the PTP master, the other as the slave. Debian 10, kernel 4.19.94; the driver for the PHY is loaded. Using linuxptp v3.1.

On the master system I run the command:

sudo ptp4l -i eth0 -f linuxptp/configs/configMaster.cfg -m  

On the client system I run:

sudo ptp4l -i eth0 -f linuxptp/configs/configslave.cfg -m  

Content of configMaster.cfg:

[global]
serverOnly       1
BMCA             noop

Content of configSlave.cfg:

[global]
clientOnly      1
BMCA            noop
step_threshold  1

This results in the following output on the slave:

ptp4l[438753.396]: selected /dev/ptp0 as PTP clock
ptp4l[438753.409]: port 1 (eth0): INITIALIZING to SLAVE on INIT_COMPLETE
ptp4l[438753.414]: port 0 (/var/run/ptp4l): INITIALIZING to LISTENING on INIT_COMPLETE
ptp4l[438753.418]: port 0 (/var/run/ptp4lro): INITIALIZING to LISTENING on INIT_COMPLETE
ptp4l[438754.075]: port 1 (eth0): new foreign master 304511.fffe.0ff048-1
ptp4l[438758.074]: selected best master clock 304511.fffe.0ff048
ptp4l[438762.072]: master offset 2426120726467 s0 freq -261066 path delay 15040
ptp4l[438762.074]: selected best master clock 304511.fffe.0ff048
ptp4l[438765.074]: master offset 2426120697575 s1 freq -270698 path delay 15156
ptp4l[438767.072]: master offset 2426120678191 s0 freq -270698 path delay 15156
ptp4l[438768.075]: master offset 2426120668273 s1 freq -280618 path delay 15830
ptp4l[438769.072]: master offset 2426120658469 s0 freq -280618 path delay 15830
ptp4l[438770.073]: master offset 2426120648789 s0 freq -280618 path delay 16022
ptp4l[438771.076]: master offset 2426120639057 s1 freq -290350 path delay 16022
...

The reported offset is approximately 40 min. Before running ptp4l I had set the PTP clocks in the PHYs to the current system time with testptp -s. The PTP clocks were therefore actually within a few seconds of each other.

Each time ptp4l reports a "master offset ... s1 ..." it steps the PTP clock back by 40 min (checked with testptp -g). Yet the reported offset only changes by about 10 us.

I also looked at the network traffic with Wireshark and saw that the Follow-Up messages from the master contain a timestamp that is off by about +69 min from what the PTP clock in the PHY is set to. After adding debug output to ptp4l I saw that, on the slave, the timestamp it extracts from the cmsgs returned by the socket is off by about -27 min from what the client PTP clock actually reads.

The difference between the false timestamps (+69 min) sent by the master and the falsely read timestamps (-27 min) on the client results in the ~40 min of assumed offset between the master clock and the client clock.
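Editor's note: as a quick sanity check (a sketch, not part of ptp4l), the offset reported in the log does convert to roughly 40 minutes, consistent with the two observed timestamp errors nearly cancelling:

```python
# Convert the "master offset" reported by ptp4l from nanoseconds to minutes.
offset_ns = 2426120726467          # taken from the ptp4l log above
offset_min = offset_ns / 1e9 / 60
print(round(offset_min, 1))        # -> 40.4

# Master Follow-Up timestamps are ~ +69 min off; the slave's RX timestamps
# are read ~ -27 min off. Their difference matches the apparent offset.
print(69 - 27)                     # -> 42, i.e. roughly the ~40 min observed
```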

Dedicated Server For Social Media App [closed]

Posted: 06 Dec 2021 09:23 AM PST

I'm using Godaddy.com Established - Business Hosting:

  • Site Visitors: 1,500,000 est. monthly visits
  • RAM: 16 GB Dedicated
  • Storage: 240 GB Dedicated
  • MySQL Databases: Unlimited

I don't like their support team and the limitations they impose to control the site. One of them is that I cannot back up my own database if its size is more than 200 MB unless I pay for a service to do that. So I would like to move to a different host with a minimum specification like the one below. What is your recommendation?

Dedicated Server Hosting:

  • Intel Xeon-D 2123IT
  • 4C/8T – 3.0 GHz Turbo
  • 32 GB DDR4 RAM
  • 2 x 500 GB SSD Storage (RAID-1)

pg_dump is not the same version as psql

Posted: 06 Dec 2021 09:06 AM PST

I updated postgres to version 14.

psql --version
psql (PostgreSQL) 14.1 (Ubuntu 14.1-2.pgdg20.04+1)

But pg_dump did not upgrade with everything else:

pg_dump --version
pg_dump (PostgreSQL) 12.9 (Ubuntu 12.9-2.pgdg20.04+1)

Any idea why that is? If I completely uninstalled postgres to do a fresh install would I lose any local databases as well?

Update: I purged everything postgres and reinstalled postgresql-14:

pg_dump --version
Error: PostgreSQL version 12 is not installed
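Editor's note: on Debian/Ubuntu with the pgdg packages, /usr/bin/pg_dump is a wrapper (from postgresql-client-common) that dispatches to a versioned binary under /usr/lib/postgresql/<ver>/bin, so this error means no version-14 client binaries are installed. A sketch of the usual fix (assumes the pgdg apt repository is already configured):

```shell
# Install the version-14 client tools so the wrapper can find them:
sudo apt-get install postgresql-client-14

# Or bypass the wrapper and call the versioned binary directly:
/usr/lib/postgresql/14/bin/pg_dump --version
```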

Server Connection Issue - With OK CPU/RAM

Posted: 06 Dec 2021 08:00 AM PST

Recently we had an occurrence where we were unable to connect to multiple masters on our Redis cluster.

Connections from our code base were timing out. We were also unable to SSH into the box during this period, essentially locking us out.

This has happened on multiple occasions, and each time the CPU was around 20% and memory usage was also around 20%. The number of TCP connections varied during each event between 7k and 12k, well below what we would expect to be an alarming level.

Connections that were already established continued to function normally. Among those existing connections were our metrics exporters, so they were able to still collect metrics on connections/cpu etc.

The network in/out would slowly decline as existing connections died off, however new ones could not connect at all, as if they were refused by the server.

We have reviewed settings such as SOMAXCONN and available file descriptors, but have not yet been able to determine the reason new connections could not be made, as there were no clear anomalies in any stats we reviewed prior to the occurrence.

The servers are running Amazon Linux 2 on x2gd.medium instance types on AWS.

The inability to log in via SSH, while the majority of the traffic was on another port seemed quite odd.

Does anyone have any ideas as to why connections could not be made, while all obvious metrics seemed OK?

nginx returns 404 when serving React from subdirectory

Posted: 06 Dec 2021 07:56 AM PST

I've started with a React application served from nginx root. Then I followed this guide to serve the application from a sub-directory (/app). As suggested here, I changed my nginx config from this:

location / {
    root /usr/share/nginx/html;
    try_files $uri $uri/ /index.html =404;
}

to this:

location /app {
    alias /usr/share/nginx/html;
    try_files $uri $uri/ /app/index.html =404;
}

This kinda works - I'm able to browse to https://localhost/app, and get all the static files as expected. My problem is with reloading URLs which are controlled by React Router (i.e. sending the nginx server a URL which includes parts after the /app). For example, before the change, both of the following worked:

  • https://localhost <-- works
  • https://localhost/auth/local <-- also works

The first simply returned the homepage. The second caused nginx to fall back to index.html, and React Router took care of internally navigating to the correct page (/auth/local).

After the change, the first request works, but the 2nd fails with 404:

  • https://localhost/app <-- works
  • https://localhost/app/auth/local <-- fails with 404

By adding some debug headers, I see that after the change, in the 2nd case, nginx does not execute the block at all.

What am I doing wrong?
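Editor's note: one likely explanation (a sketch, not verified against this setup): in try_files, only the *last* parameter is the fallback; everything before it is tested as a file. With =404 last, /app/index.html is merely tested as a file (a test that is known to be unreliable in combination with alias), so unmatched URIs return 404 instead of falling back to the SPA entry point. Dropping =404 and using trailing slashes often resolves this:

```nginx
location /app/ {
    alias /usr/share/nginx/html/;
    # Last parameter = fallback: internally redirect unknown URIs to the
    # SPA entry point instead of returning 404.
    try_files $uri $uri/ /app/index.html;
}
```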

Azure VM - Unable to RDP [closed]

Posted: 06 Dec 2021 07:39 AM PST

I have an Azure VM on which I have enabled Hyper-V (nested virtualization), and I have 4 local VMs in it, one of which is a Domain Controller. Full disclosure, I know nothing about DCs, so what I am saying next may not make a lot of sense. The other three Hyper-V machines are connected to it [???], but now the issue is that my main VM (the host) is not turning on. When I try to RDP into it from the Portal, this is what I get:

[screenshot of the RDP connection error omitted]

I have tried many things (that honestly I am too tired to write), and nothing has worked. For the life of me, I cannot figure out what is going on. The RDP inbound port is enabled.

(In an unrelated incident) I've seen this behaviour when I had turned off the option to RDP into the VM from within the VM itself (from the Server Manager). I didn't know what to do back then, so I used to just delete it and redeploy. I cannot do that here, because I'd like to know if there's a solution to this, such as running an Azure PS command from the Azure Portal to re-enable it, or some other workaround.

WinRM Python File copier plugin : only appends, never overwrites?

Posted: 06 Dec 2021 07:30 AM PST

I am unable to change the behavior of the WinRM File Copier in Rundeck. It is a Python-based plugin, launched from a RHEL 7.9 (Maipo) Linux Rundeck server that is connected to an Active Directory server.

When it copies an existing file from the Rundeck server (Linux, with Kerberos/Active Directory authentication) to Windows-based targets, it appends the content. I am looking for a way to force overwriting of the target file instead. Until now, I have had to delete the target file each time before a file is copied to the target.

Does someone know how to modify this behavior in order to overwrite the content, not append it? I checked several Rundeck settings without success.

Rundeck version: 3.2.1-20200113
WinRM Python File Copier plugin: 2.0.12
WinRM Node Executor Python plugin: 2.0.12

Python 2 is the version used on the Linux RHEL 7.9 Rundeck server.

Thanks

Question also posted on : https://superuser.com/questions/1690791/winrm-file-copier-plugin-only-appends-never-overwrites

Does Windows SSO work with ADFS 2016 for OIDC Web Application?

Posted: 06 Dec 2021 07:03 AM PST

Our web application uses OpenID Connect (OIDC) Implicit Flow for user login with ADFS 2016. Login generally works; however, users get a login screen asking for user name and password.

Does Windows login / SSO (Kerberos?) work with such a setup, so users don't get a login screen but are automatically logged in with their Windows login?

If so, what are the requirements for SSO (Kerberos?) to work in such a setup? What would be the first steps to troubleshoot why the login screen is shown?

I can't upload files (Apache)

Posted: 06 Dec 2021 06:56 AM PST

I have a Symfony app running in Docker, exposing port 8000; I then proxy requests from my domain to http://localhost:8000/. It works well, but when I send files using multipart/form-data it returns a 500 error code with no further information. If I make the request directly to http://my_ip:8000 it works fine, so I think it is a proxy error.

I checked apache errors.log and I get this error

[http:error] [pid 140227] [client 190.55.60.91:4255] AH02429: Response header name 'PHP message' contains invalid characters, aborting request

This is my apache config:

<VirtualHost *:80>
    ProxyPreserveHost On
    ProxyRequests Off
    ServerName mydomain.com
    ServerAlias mydomain.com

    SetEnv proxy-sendchunked 1

    ProxyPass / http://localhost:8000/
    ProxyPassReverse / http://localhost:8000/
    SetEnv proxy-sendchunks 1
    ProxyTimeout 1000
</VirtualHost>

<IfModule mod_ssl.c>
    <VirtualHost *:443>
        ProxyPreserveHost On
        ProxyRequests Off

        ServerName my_domain.com
        ServerAlias my_domain.com

        ProxyTimeout 60
        ProxyPass / http://localhost:8000/
        ProxyPassReverse / http://localhost:8000/

        ErrorLog ${APACHE_LOG_DIR}/error.log
        CustomLog ${APACHE_LOG_DIR}/access.log combined
        SSLEngine on

        SSLCertificateFile      /etc/ssl/private/my_cert.crt
        SSLCertificateChainFile /etc/ssl/private/ca-bundle-client.crt
        SSLCertificateKeyFile   /etc/ssl/private/my_cert.key

        <FilesMatch "\.(cgi|shtml|phtml|php)$">
            SSLOptions +StdEnvVars
        </FilesMatch>
        <Directory /usr/lib/cgi-bin>
            SSLOptions +StdEnvVars
        </Directory>
    </VirtualHost>
</IfModule>

I tried adding properties like "SetEnv proxy-sendchunked 1" and "ProxyTimeout 1000", but it still doesn't work.

using domain controller to allow remote vpn hosts access to certain parts of internal network

Posted: 06 Dec 2021 06:51 AM PST

I'm looking for a way to establish secure connections from remote users to an internal closed LAN. I can already connect the remote machine to a Samba domain controller through an OpenVPN 2.x client before login by using a scheduled task, so remote connection to the domain is solved.

What I would need now is to know if there is a way to have the domain controller tell a firewall that this or that machine belongs to the domain, and have the firewall use this information to decide whether the host can access a different internal network. For example, I would have the OpenVPN server (10.0.0.2) give each user a reserved IP in the 10.0.0.x range, so that they can see the domain controller (10.0.0.3). Then the domain controller tells the firewall (10.0.0.1, gateway) whether the machines connected using those IPs are joined to the domain and are therefore safe to let into an internal network through another interface the firewall is connected to, for example 10.0.1.x. Until that condition is fulfilled, the users would only have access to the 10.0.0.x "lobby".

The idea is to prevent the remote user from simply using the VPN credentials and certificate on any machine (potentially unsafe machines that are running god knows what) to access the secure internal network. I already know about LDAP authentication for OpenVPN, but as far as I know that only asks the domain controller whether the given credentials are OK, and doesn't check whether the machine is actually on the domain.

Does this exist? Is it even possible? Is it even necessary, or am I looking at this the wrong way and there's a much easier alternative?

Thanks in advance.

Slurm srun cannot allocate resources for GPUs - Invalid generic resource specification

Posted: 06 Dec 2021 06:58 AM PST

I am able to launch a job on a GPU server the traditional way (using CPU and MEM as consumables):

~ srun -c 1 --mem 1M -w serverGpu1 hostname
serverGpu1

but trying to use the GPUs will give an error:

~ srun -c 1 --mem 1M --gres=gpu:1 hostname
srun: error: Unable to allocate resources: Invalid generic resource (gres) specification

I checked this question but it doesn't help in my case.

Slurm.conf

On all nodes

SlurmctldHost=vinz
SlurmctldHost=shiny
GresTypes=gpu
MpiDefault=none
ProctrackType=proctrack/cgroup
ReturnToService=1
SlurmctldPidFile=/media/Slurm/slurmctld.pid
SlurmctldPort=6817
SlurmdPidFile=/var/run/slurmd.pid
SlurmdPort=6818
SlurmdSpoolDir=/var/spool/slurmd
SlurmUser=slurm
StateSaveLocation=/media/Slurm
SwitchType=switch/none
TaskPlugin=task/cgroup

InactiveLimit=0
KillWait=30
MinJobAge=300
SlurmctldTimeout=120
SlurmdTimeout=300
Waittime=0
DefMemPerCPU=1
SchedulerType=sched/backfill
SelectType=select/cons_tres
SelectTypeParameters=CR_CPU_Memory
AccountingStorageType=accounting_storage/none
AccountingStoreJobComment=YES
ClusterName=cluster
JobCompLoc=/media/Slurm/job_completion.txt
JobCompType=jobcomp/filetxt
JobAcctGatherFrequency=30
JobAcctGatherType=jobacct_gather/cgroup
SlurmctldDebug=info
SlurmctldLogFile=/media/Slurm/slurmctld.log
SlurmdDebug=info
SlurmdLogFile=/var/log/slurm-llnl/slurmd.log
MaxArraySize=10001
NodeName=docker1 CPUs=144 Boards=1 RealMemory=300000 Sockets=4 CoresPerSocket=18 ThreadsPerCore=2 Weight=100 State=UNKNOWN
NodeName=serverGpu1 CPUs=96 RealMemory=550000 Boards=1 SocketsPerBoard=2 CoresPerSocket=24 Gres=gpu:nvidia_tesla_t4:4 ThreadsPerCore=2 Weight=500 State=UNKNOWN

PartitionName=Cluster Nodes=docker1,serverGpu1 Default=YES MaxTime=INFINITE State=UP

cgroup.conf

On all nodes

CgroupAutomount=yes
CgroupReleaseAgentDir="/etc/slurm-llnl/cgroup"
ConstrainCores=yes
ConstrainDevices=yes
ConstrainRAMSpace=yes

gres.conf

Only on GPU servers

AutoDetect=nvml  

As for the log of the GPU server:

[2021-12-06T12:22:52.800] gpu/nvml: _get_system_gpu_list_nvml: 4 GPU system device(s) detected
[2021-12-06T12:22:52.801] CPU frequency setting not configured for this node
[2021-12-06T12:22:52.803] slurmd version 20.11.2 started
[2021-12-06T12:22:52.803] killing old slurmd[42176]
[2021-12-06T12:22:52.805] slurmd started on Mon, 06 Dec 2021 12:22:52 +0100
[2021-12-06T12:22:52.805] Slurmd shutdown completing
[2021-12-06T12:22:52.805] CPUs=96 Boards=1 Sockets=2 Cores=24 Threads=2 Memory=772654 TmpDisk=1798171 Uptime=8097222 CPUSpecList=(null) FeaturesAvail=(null) FeaturesActive=(null)

I would like some guidance on how to resolve this issue, please.

Edits: As requested by @Gerald Schneider

~ sinfo -N -o "%N %G"
NODELIST GRES
docker1 (null)
serverGpu1 (null)
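Editor's note: (null) GRES in sinfo suggests slurmd never registered the GPUs with the controller. AutoDetect=nvml only works when slurmd was built against NVML; a fallback (a sketch, assuming the usual /dev/nvidia0..3 device nodes) is to declare the devices explicitly in gres.conf on serverGpu1 and restart slurmd:

```
# /etc/slurm-llnl/gres.conf on serverGpu1 (explicit, no autodetection)
Name=gpu Type=nvidia_tesla_t4 File=/dev/nvidia[0-3]
```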

OpenVPN Connect to Community server

Posted: 06 Dec 2021 06:40 AM PST

Just a quick question. Is it possible to use OpenVPN Connect v3 to connect to a Community version of an OpenVPN server (not OpenVPN Access server)?

How to run checks as the first step of a task sequence?

Posted: 06 Dec 2021 06:22 AM PST

In an MDT task sequence, as the first step (before the OS install), can we make an API call with the system MAC address / machine serial number and have the task sequence continue or stop based on the value returned from the API?

I am able to call the API as a post-configuration step after the OS install, but not before it.

"/run/nginx.pid" failed (2: No such file or directory) to [::]:80 failed (98: Address already in use, to [::]:443 failed 98: Address already in use

Posted: 06 Dec 2021 07:18 AM PST

Nginx gives this error:

2021/12/06 06:16:48 [error] 165951#165951: open() "/run/nginx.pid" failed (2: No such file or directory)
2021/12/06 06:17:04 [notice] 165971#165971: signal process started
2021/12/06 06:17:06 [emerg] 165985#165985: bind() to 0.0.0.0:80 failed (98: Address already in use)
2021/12/06 06:17:06 [emerg] 165985#165985: bind() to [::]:80 failed (98: Address already in use)
2021/12/06 06:17:06 [emerg] 165985#165985: bind() to 0.0.0.0:443 failed (98: Address already in use)
2021/12/06 06:17:06 [emerg] 165985#165985: bind() to [::]:443 failed (98: Address already in use)
2021/12/06 06:17:06 [emerg] 165985#165985: bind() to 0.0.0.0:80 failed (98: Address already in use)
2021/12/06 06:17:06 [emerg] 165985#165985: bind() to [::]:80 failed (98: Address already in use)
2021/12/06 06:17:06 [emerg] 165985#165985: bind() to 0.0.0.0:443 failed (98: Address already in use)
2021/12/06 06:17:06 [emerg] 165985#165985: bind() to [::]:443 failed (98: Address already in use)
2021/12/06 06:17:06 [emerg] 165985#165985: bind() to 0.0.0.0:80 failed (98: Address already in use)
2021/12/06 06:17:06 [emerg] 165985#165985: bind() to [::]:80 failed (98: Address already in use)
2021/12/06 06:17:06 [emerg] 165985#165985: bind() to 0.0.0.0:443 failed (98: Address already in use)
2021/12/06 06:17:06 [emerg] 165985#165985: bind() to [::]:443 failed (98: Address already in use)
2021/12/06 06:17:06 [emerg] 165985#165985: bind() to 0.0.0.0:80 failed (98: Address already in use)
2021/12/06 06:17:06 [emerg] 165985#165985: bind() to [::]:80 failed (98: Address already in use)
2021/12/06 06:17:06 [emerg] 165985#165985: bind() to 0.0.0.0:443 failed (98: Address already in use)
2021/12/06 06:17:06 [emerg] 165985#165985: bind() to [::]:443 failed (98: Address already in use)
2021/12/06 06:17:06 [emerg] 165985#165985: bind() to 0.0.0.0:80 failed (98: Address already in use)
2021/12/06 06:17:06 [emerg] 165985#165985: bind() to [::]:80 failed (98: Address already in use)
2021/12/06 06:17:06 [emerg] 165985#165985: bind() to 0.0.0.0:443 failed (98: Address already in use)
2021/12/06 06:17:06 [emerg] 165985#165985: bind() to [::]:443 failed (98: Address already in use)

And my website is not available.

In my website's nginx config in /etc/nginx/sites-available I put ipv6only=on.

I ran netstat -plant | grep 80, but only nginx is listening on the port, so that doesn't solve the problem.
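Editor's note: the missing pid file plus "Address already in use" together usually mean an orphaned nginx master still holds ports 80/443 while the init system has lost track of it. A sketch of the usual recovery (verify what you are killing first):

```shell
# See which process actually holds the ports:
sudo ss -tlnp | grep -E ':80 |:443 '

# Stop the service, kill any leftover nginx processes, then start fresh
# so a new pid file is written to /run/nginx.pid:
sudo systemctl stop nginx
sudo pkill -f 'nginx: master'
sudo systemctl start nginx
```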

Windows 2012r2 domain environment and NTLM logins

Posted: 06 Dec 2021 06:47 AM PST

I would like to disable all NTLM in my domain environment, but before that I enabled NTLM auditing on the domain controller, and I see some 8004 events with my local domain users and computers in the event descriptions. All my clients have Windows 10 installed, so why is NTLM still used in my environment, when Kerberos should be used by default?

How to install packages from command line on Suse

Posted: 06 Dec 2021 08:32 AM PST

What is the Suse version of apt-get or yum? Or how do I get one of them installed in order to install software packages from the command line?

A fairly intense session of googling suggests that it may be yast or yast2, but no sensible HOWTO for listing and installing packages from the command line seems to exist (maybe I am looking in the wrong place).

If I am an administrator for a remote Suse server, how do I install packages from the command line? (Not using a GUI and preferably installing from a central repo)
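Editor's note: on SUSE/openSUSE the command-line package manager is zypper (YaST is the configuration front-end that sits on top of it). A sketch of the apt-get/yum equivalents (`<package>` is a placeholder):

```shell
# Refresh repository metadata (the apt-get update equivalent):
zypper refresh

# Search for and install a package from the configured repos:
zypper search <package>
zypper install <package>

# Apply available updates:
zypper update
```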

Error response from daemon: {"message":"No such container: kubelet"}

Posted: 06 Dec 2021 09:06 AM PST

When adding a new node to a Kubernetes cluster I end up with this error :

+ docker start kubelet
Error response from daemon: {"message":"No such container: kubelet"}
Error: failed to start containers: kubelet
+ sleep 2

This error occurs on a cluster that is already damaged. There is only one node left out of 3. The remaining node apparently has a problem with certificate recovery and distribution; SSL is no longer functional on this node. For information, the Kubernetes cluster was deployed through Rancher. The etcd container regularly reboots on node 3, and etcd does not want to deploy to the other nodes that I am trying to re-integrate into the cluster.

Kubelet is launched in a Docker container, itself launched by Rancher when it created the Kubernetes cluster. As for the tests carried out, I relaunched a new Docker container with etcd and tried to start again from a snapshot... nothing allows relaunching the cluster. Adding a new node is also not functional. From what I've seen, there is also an issue with the SSL certificates created by Rancher, which it cannot find.

Nginx TCP PROXY forward Client IP

Posted: 06 Dec 2021 09:06 AM PST

I use nginx as a TCP reverse proxy. The backend only sees the proxy IP, but I need the real user IP. I tried to include proxy_params but it doesn't work.

nginx conf:

user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

events {
    worker_connections 1024;
    use epoll;
    multi_accept on;
}
stream {
    include /etc/nginx/tcp-proxies/*.proxy;
}

xxx.proxy:

server {
    listen 11111;
    proxy_pass xxx.xxx.xxx.xxx:33333;
}

What do I have to do to show the real client IP to the backend?

regards.
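Editor's note: proxy_params sets HTTP headers, which do not exist at the TCP (stream) layer, so it cannot work here. The stream module's usual mechanism is the PROXY protocol; note the backend service on port 33333 must be configured to parse it. A sketch:

```nginx
server {
    listen 11111;
    # Prefix each connection with a PROXY protocol header that carries
    # the original client address; the backend must expect it.
    proxy_protocol on;
    proxy_pass xxx.xxx.xxx.xxx:33333;
}
```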

Unable to connect to OpenVPN with saved password "auth-user-pass"

Posted: 06 Dec 2021 07:07 AM PST

I'm trying to save the username and password of my OpenVPN client in '.secret.txt', but when attempting to connect I receive errors, or the password is requested instead of being read from '.secret.txt'.

Here is my config file:

resolv-retry infinite
nobind
persist-key
persist-tun
key-direction 1
remote-cert-tls server
tls-version-min 1.2
verify-x509-name server_4EBX2EpXPZasiTv1 name
cipher AES-256-CBC
auth SHA256
comp-lzo
verb 3
<ca>
auth-user-pass //root/.secret.txt

When connecting, I'm receiving errors:

WARNING: file '//root//secret.txt' is group or others accessible
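Editor's note: that warning is OpenVPN objecting to the credential file's permissions; restricting it to the owner usually silences it. A sketch (path taken from the question):

```shell
# OpenVPN warns when the credentials file is group/world readable.
# Make it owner-only:
chmod 600 /root/.secret.txt

# The file itself must contain exactly two lines:
#   line 1: username
#   line 2: password
```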

IIS 10 (MS Server 2016) FTPs - failed to retrieve directory listing

Posted: 06 Dec 2021 08:03 AM PST

I'm trying to get Microsoft Server 2016's IIS 10 to run FTPS. I have it working internally (I need to change the Firewall's External IP Address to match the internal IP for LAN or the external IP for WAN, but it works).

When I try to connect using FileZilla from outside the LAN, I receive a "Failed to retrieve directory listing". I have ports 989/990 TCP and 5000-5005 forwarded to the server using my Verizon FiOS NAT router.

I also have Windows Firewall set to accept inbound/outbound 5000-5005 (wasn't sure if it was needed), and to allow 989/990 in. I'm also attempting to connect from my MacBook Pro from outside my LAN. Using Finder, it prompts me for credentials (which wouldn't happen if it was completely rejected). It tries to enter passive mode (11,22,33,44,237,36), which I think means port 60708?

Any ideas?
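Editor's note: the passive-mode reply encodes the data port in its last two numbers (port = p1*256 + p2), and a quick check confirms 60708 - which falls outside the 5000-5005 range being forwarded, a plausible cause of the listing failure (IIS lets you set the data channel port range under FTP Firewall Support):

```python
# FTP PASV reply (h1,h2,h3,h4,p1,p2): data port = p1*256 + p2
p1, p2 = 237, 36
port = p1 * 256 + p2
print(port)                     # -> 60708
print(5000 <= port <= 5005)     # -> False: outside the forwarded range
```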

Missing or skipped ICMP_Seq In AIX Ping

Posted: 06 Dec 2021 08:03 AM PST

I noticed an intermittent slow response on pings from my application server to my database server. So to diagnose, I ran standard ping and telnet tests.

For the telnet tests, it was able to connect. However, some telnet sessions took longer to connect than others.

For the ping, I noticed that a lot of icmp_seq values are missing; sequence numbers were skipped occasionally. On the other server, the replies arrived strictly in sequence.

But for the app to web and app to app, there were no skipped items. From the app to DB there is.

Does anyone have any idea what is happening?

Just to add, the database response times were very intermittent as well; sometimes slow. The application server and database are on different network segments as well.

What can we look at ?

Nginx+gunicorn 404

Posted: 06 Dec 2021 07:07 AM PST

supervisorctl says that the gunicorn process is in the RUNNING state, so I thought it was a success. But something is still wrong: the resource is available only by IP.

Nginx config:

upstream hello_app_server {
    server unix:/var/www/aqe-backend/gunicorn.sock fail_timeout=0;
}

server {

    listen   80;
    server_name 188.166.200.51;

    client_max_body_size 4G;

    access_log /var/www/aqe-backend/logs/nginx-access.log;
    error_log /var/www/aqe-backend/logs/nginx-error.log;

    location /static/ {
        alias   /var/www/aqe-backend/static/;
    }

    location /media/ {
        alias   /var/www/aqe-backend/media/;
    }

    location / {
        # an HTTP header important enough to have its own Wikipedia entry:
        #   http://en.wikipedia.org/wiki/X-Forwarded-For
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # enable this if and only if you use HTTPS, this helps Rack
        # set the proper protocol for doing redirects:
        # proxy_set_header X-Forwarded-Proto https;

        # pass the Host: header from the client right along so redirects
        # can be set properly within the Rack application
        proxy_set_header Host $http_host;

        # we don't want nginx trying to do something clever with
        # redirects, we set the Host: header above already.
        proxy_redirect off;

        # set "proxy_buffering off" *only* for Rainbows! when doing
        # Comet/long-poll stuff.  It's also safe to set if you're
        # using only serving fast clients with Unicorn + nginx.
        # Otherwise you _want_ nginx to buffer responses to slow
        # clients, really.
        # proxy_buffering off;

        # Try to serve static files from nginx, no point in making an
        # *application* server like Unicorn/Rainbows! serve static files.
        if (!-f $request_filename) {
            proxy_pass http://188.166.200.51;
            break;
        }
    }

    error_page 500 502 503 504 /500.html;
    location = /500.html {
        root /var/www/aqe-backend/static/;
    }
}

gunicorn script:

#!/bin/bash

NAME="aqe"                                   # Name of the application
DJANGODIR=/var/www/aqe-backend/              # Django project directory
SOCKFILE=/var/www/aqe-backend/gunicorn.sock  # we will communicate using this unix socket
USER=www-data                                # the user to run as
GROUP=www-data                               # the group to run as
NUM_WORKERS=3                                # how many worker processes should Gunicorn spawn
DJANGO_SETTINGS_MODULE=project.settings_prod # which settings file should Django use
DJANGO_WSGI_MODULE=project.wsgi              # WSGI module name

echo "Starting $NAME as `whoami`"

# Activate the virtual environment
cd $DJANGODIR
source env/bin/activate
export DJANGO_SETTINGS_MODULE=$DJANGO_SETTINGS_MODULE
export PYTHONPATH=$DJANGODIR:$PYTHONPATH

# Create the run directory if it doesn't exist
RUNDIR=$(dirname $SOCKFILE)
test -d $RUNDIR || mkdir -p $RUNDIR

# Start your Django Unicorn
# Programs meant to be run under supervisor should not daemonize themselves (do not use --daemon)
exec gunicorn ${DJANGO_WSGI_MODULE}:application \
  --name $NAME \
  --workers $NUM_WORKERS \
  --user=$USER --group=$GROUP \
  --bind=unix:$SOCKFILE \
  --log-level=debug \
  --log-file=-
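Editor's note (an observation, not a verified fix): the hello_app_server upstream is defined but never referenced; the fallback proxy_pass points at the server's own IP, so requests loop back into nginx instead of reaching the gunicorn socket. A sketch of the intended fallback inside the existing location / block:

```nginx
# Hand non-file requests to gunicorn via the upstream defined at the top:
if (!-f $request_filename) {
    proxy_pass http://hello_app_server;
    break;
}
```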

Join Azure VM to Azure AD

Posted: 06 Dec 2021 07:38 AM PST

Using Microsoft Azure I have a default Active Directory domain (apparently), and I can create VMs. To my surprise, such VMs are not joined to the AD domain automatically, and domain users can't log into them.

Is it possible to join these Azure VMs to the Azure Default AD? How or why not?

Thanks!

Rewriting main domain to subdomain (mod_rewrite)

Posted: 06 Dec 2021 06:52 AM PST

So I'm trying to write a mod_rewrite rule that will send everything on my main domain to a subdomain.

For example, redirect

http://example.com/1/2/3/4/5?n=6&i=7  

to

http://sub.example.com/1/2/3/4/5?n=6&i=7  

Here's what I have so far:

RewriteEngine On
RewriteCond ^http://www\.example.com\/ [NC]
RewriteRule ^(.*)$ http://sub.example.com/$1 [R=301,L]

But it doesn't seem to be working. Any tips?
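Editor's note: RewriteCond takes a TestString followed by a pattern, and the scheme/host is not part of the URI that RewriteRule sees, so the host has to be matched via %{HTTP_HOST}. A sketch of the usual form (the query string is preserved automatically on external redirects):

```apache
RewriteEngine On
RewriteCond %{HTTP_HOST} ^(www\.)?example\.com$ [NC]
RewriteRule ^(.*)$ http://sub.example.com/$1 [R=301,L]
```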

How to alter the global broadcast address (255.255.255.255) behavior on Windows?

Posted: 06 Dec 2021 06:20 AM PST

Desired behavior

When an application sends a packet to the global broadcast IP address 255.255.255.255, I would like that the packet be sent to the Ethernet global broadcast address (ff:ff:ff:ff:ff:ff), on all interfaces.

On Linux and probably other OSes as well this seems to work. Windows XP and Windows 7 exhibit different behaviors about this, and neither behaviour is desirable for my situation.

Windows XP behavior

The packet will be sent correctly to the first network interface (interface order is specified in "Network Connections/Advanced/Advanced Settings"). It will also be sent to the other interfaces.

Everything is right so far. Problem is, when sending to the other interfaces, the source address of the broadcast packet is the IP address of the first interface. For example, imagine this network configuration (order is important):

  • Adapter 1: IP address 192.168.0.1
  • Adapter 2: IP address 10.0.0.1
  • Adapter 3: IP address 172.17.0.1

Now if I send a broadcast packet, the following packets will be sent (with source and destination IP addresses):

  • On adapter 1: 192.168.0.1 => 255.255.255.255

  • On adapter 2: 192.168.0.1 => 255.255.255.255

  • On adapter 3: 192.168.0.1 => 255.255.255.255

In practice, applications using broadcast packets won't work on any interface other than adapter 1. In my opinion, this is a blatant bug in the TCP/IP stack of Windows XP.

Windows 7 behavior

Modifying the network interface order doesn't seem to have any effect on Windows 7. Instead, broadcast seems to be controlled by the IP route table.

IPv4 Route Table
===========================================================================
Active Routes:
Network Destination        Netmask          Gateway       Interface  Metric
          0.0.0.0          0.0.0.0   10.202.254.254       10.202.1.2    286
          0.0.0.0          0.0.0.0      192.168.0.1      192.168.0.3     10
       10.202.0.0      255.255.0.0         On-link        10.202.1.2    286
       10.202.1.2  255.255.255.255         On-link        10.202.1.2    286
   10.202.255.255  255.255.255.255         On-link        10.202.1.2    286
        127.0.0.0        255.0.0.0         On-link         127.0.0.1    306
        127.0.0.1  255.255.255.255         On-link         127.0.0.1    306
  127.255.255.255  255.255.255.255         On-link         127.0.0.1    306
      192.168.0.0    255.255.255.0         On-link       192.168.0.3    266
      192.168.0.3  255.255.255.255         On-link       192.168.0.3    266
    192.168.0.255  255.255.255.255         On-link       192.168.0.3    266
        224.0.0.0        240.0.0.0         On-link         127.0.0.1    306
        224.0.0.0        240.0.0.0         On-link       192.168.0.3    266
        224.0.0.0        240.0.0.0         On-link        10.202.1.2    286
  255.255.255.255  255.255.255.255         On-link         127.0.0.1    306
  255.255.255.255  255.255.255.255         On-link       192.168.0.3    266
  255.255.255.255  255.255.255.255         On-link        10.202.1.2    286
===========================================================================

See the 255.255.255.255 routes? Yep, they control broadcast packets. In this situation, broadcast packets will be sent via 192.168.0.3 because it has the lower metric... but not to the other interfaces.

You can change the interface through which global broadcast packets will be sent very easily (just add a persistent 255.255.255.255 route with a low metric). But no matter how hard you try, broadcast packets will only be sent on only one interface, not all of them like I'd like it to do.

Conclusion

  • Windows 7 only sends broadcast packets to one interface. You can choose which one, but that's not the point here.
  • Windows XP sends broadcast packets to all interfaces, but it only sends them as expected to one interface, which in practice is equivalent to the Windows 7 behavior.

The goal

I want to change this global IP broadcast support in Windows (preferably Windows 7) once and for all. Of course the better way would be to have some kind of supported configuration change (registry hack or similar), but I'm open to all suggestions.

Any ideas?
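Editor's note: as a user-space workaround (not the OS-level configuration change asked for), an application can enumerate its local addresses and send one datagram per interface, since binding the source address selects the outgoing interface. A sketch, assuming UDP and that the caller supplies its local IPs:

```python
import socket

def broadcast_on_each(payload: bytes, port: int, local_ips):
    """Send payload to 255.255.255.255 once per local interface address.

    Binding each socket to one interface's own IP makes the OS emit the
    broadcast on that interface, with the matching source address.
    """
    for ip in local_ips:
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        try:
            s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
            s.bind((ip, 0))   # pick the outgoing interface via its address
            s.sendto(payload, ("255.255.255.255", port))
        finally:
            s.close()
```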
