Sunday, July 4, 2021

Recent Questions - Server Fault

Linux host routing VLAN traffic cannot receive or make network connections

Posted: 04 Jul 2021 09:33 PM PDT

I have a CentOS 7 system that has three interfaces. I recently converted one of those interfaces to using VLANs. This system serves as a firewall and router between the various networks. Everything seemed to be working as expected, except one item: I cannot connect into or out of the CentOS system from/to any hosts on the VLANs.

For example, if I ssh from 192.168.32.95 (a host on VLAN 32) to 192.168.32.1 (VLAN 32 interface on the CentOS host), I can see the packets with tcpdump arrive on the enp3s0 interface, tagged with vlan 32. The iptables INPUT chain logs and ACCEPTS the packet. After that, the packet is gone. No SYN-ACK sent, no RST sent.

Similarly, for something like NTP from 192.168.32.1 to 192.168.32.3, the firewall logs and ACCEPTs the outgoing NTP packet, but then no packet is seen by tcpdump leaving on enp3s0 (i.e. enp3s0.32).

Pings from VLAN 32 work...I can ping the CentOS host from VLAN 32 and get a reply.

The CentOS system is forwarding packets between all combinations of the VLAN and non-VLAN interfaces.

Any thoughts on what could account for these connection issues, or even a way to further track down what is happening?
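
For anyone digging into the same symptoms, a hedged diagnostic sketch using the interface names from this question: compare what the kernel does on the VLAN subinterface with what arrives on the parent NIC, and rule out reverse-path filtering, a common cause of one-way traffic on multi-homed routers.

# Capture on the VLAN subinterface itself, not only the parent NIC:
tcpdump -e -nn -i enp3s0.32 host 192.168.32.95

# Ask the kernel which route/interface it would use for the reply:
ip route get 192.168.32.95

# Check reverse-path filtering (0 = off, 1 = strict, 2 = loose); note the
# "/" in place of "." for interface names containing a dot:
sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.enp3s0/32.rp_filter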

# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether c0:25:e9:0e:cb:a1 brd ff:ff:ff:ff:ff:ff
    inet 10.20.30.177/28 brd 10.20.30.191 scope global enp1s0
       valid_lft forever preferred_lft forever
3: enp2s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 84:16:f9:05:3a:fb brd ff:ff:ff:ff:ff:ff
    inet 192.168.16.1/24 brd 192.168.16.255 scope global enp2s0
       valid_lft forever preferred_lft forever
4: enp3s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 6c:f0:49:42:5b:fa brd ff:ff:ff:ff:ff:ff
5: enp3s0.32@enp3s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 6c:f0:49:42:5b:fa brd ff:ff:ff:ff:ff:ff
    inet 192.168.32.1/24 brd 192.168.32.255 scope global enp3s0.32
       valid_lft forever preferred_lft forever
6: enp3s0.50@enp3s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 6c:f0:49:42:5b:fa brd ff:ff:ff:ff:ff:ff
    inet 192.168.50.1/24 brd 192.168.50.255 scope global enp3s0.50
       valid_lft forever preferred_lft forever

Here is the INPUT chain where I put a logging and ACCEPT rule right at the beginning.

# iptables -t filter -S INPUT
-P INPUT DROP
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -s 192.168.32.95/32 -d 192.168.32.1/32 -p tcp -m tcp --dport 22 -j LOG --log-prefix "TEST IN: "
-A INPUT -s 192.168.32.95/32 -d 192.168.32.1/32 -p tcp -m tcp --dport 22 -j ACCEPT
-A INPUT -m state --state INVALID -j INVALID
-A INPUT -m state --state UNTRACKED -j UNTRACKED
-A INPUT -i enp1s0 -j INPUT-EXT
-A INPUT -i enp2s0 -j INPUT-MID
-A INPUT -i enp3s0.32 -j INPUT-V32
-A INPUT -i enp3s0.50 -j INPUT-V50
-A INPUT -j LOG --log-prefix "DROP EOC INPUT: " --log-tcp-options --log-ip-options
-A INPUT -j DROP

Firewall logs the SYN packet.

Jul  4 14:21:49 outside kernel: [112380.489332] TEST IN: IN=enp3s0.32 OUT= MAC=6c:f0:49:42:5b:fa:52:54:00:35:f4:e4:08:00:45:00:00:3c SRC=192.168.32.95 DST=192.168.32.1 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=17731 DF PROTO=TCP SPT=51926 DPT=22 WINDOW=29200 RES=0x00 SYN URGP=0  

Output of tcpdump shows multiple copies of the SYN packet arriving.

# tcpdump -e -nn -i enp3s0 host 192.168.32.95 and host 192.168.32.1
14:21:49.516122 52:54:00:35:f4:e4 > 6c:f0:49:42:5b:fa, ethertype 802.1Q (0x8100), length 78: vlan 32, p 0, ethertype IPv4, 192.168.32.95.51926 > 192.168.32.1.22: Flags [S], seq 3631520692, win 29200, options [mss 1460,sackOK,TS val 69303507 ecr 0,nop,wscale 7], length 0
14:21:50.529043 52:54:00:35:f4:e4 > 6c:f0:49:42:5b:fa, ethertype 802.1Q (0x8100), length 78: vlan 32, p 0, ethertype IPv4, 192.168.32.95.51926 > 192.168.32.1.22: Flags [S], seq 3631520692, win 29200, options [mss 1460,sackOK,TS val 69303760 ecr 0,nop,wscale 7], length 0
14:21:52.543926 52:54:00:35:f4:e4 > 6c:f0:49:42:5b:fa, ethertype 802.1Q (0x8100), length 78: vlan 32, p 0, ethertype IPv4, 192.168.32.95.51926 > 192.168.32.1.22: Flags [S], seq 3631520692, win 29200, options [mss 1460,sackOK,TS val 69304264 ecr 0,nop,wscale 7], length 0
14:21:56.607938 52:54:00:35:f4:e4 > 6c:f0:49:42:5b:fa, ethertype 802.1Q (0x8100), length 78: vlan 32, p 0, ethertype IPv4, 192.168.32.95.51926 > 192.168.32.1.22: Flags [S], seq 3631520692, win 29200, options [mss 1460,sackOK,TS val 69305280 ecr 0,nop,wscale 7], length 0
14:22:04.799936 52:54:00:35:f4:e4 > 6c:f0:49:42:5b:fa, ethertype 802.1Q (0x8100), length 78: vlan 32, p 0, ethertype IPv4, 192.168.32.95.51926 > 192.168.32.1.22: Flags [S], seq 3631520692, win 29200, options [mss 1460,sackOK,TS val 69307328 ecr 0,nop,wscale 7], length 0

How to Implement Rate Limiting in Azure Web Application Firewall (WAF)?

Posted: 04 Jul 2021 09:21 PM PDT

I am looking to implement global rate limiting in Azure WAF. I have created custom rate limiting rules, but they are IP-based. I know Azure DDoS protection provides a certain coverage limit, but my goal is to have a maximum limit of HTTP requests that I can serve before my application becomes unstable or infrastructure costs grow too high. I do have things like a maximum number of Kubernetes pods and their CPU limits defined, but this is an additional step to make sure everything is fine.
I feel this should be a common use case, but I haven't been able to find a solution.
I have contacted Azure support and they told me that they don't have an option for global rate limiting. The only viable solution that I have found so far is using the NGINX Ingress controller, explained in https://medium.com/titansoft-engineering/rate-limiting-for-your-kubernetes-applications-with-nginx-ingress-2e32721f7f57. But this creates rate limits per NGINX Ingress pod, and I would have to add a new memcached pod to achieve global rate limiting with NGINX.
I am looking for possible solutions to achieve this. Thanks in advance.
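
For reference, the memcached-backed approach mentioned above is ingress-nginx's built-in global rate limiting; a rough sketch of wiring it up (the Ingress and ConfigMap names are placeholders, not from the question):

# Annotate the Ingress with a cluster-wide limit; ingress-nginx coordinates
# the counters through memcached instead of per-pod state:
kubectl annotate ingress my-app \
  nginx.ingress.kubernetes.io/global-rate-limit="1000" \
  nginx.ingress.kubernetes.io/global-rate-limit-window="1m"

# Point the controller at memcached via its ConfigMap:
kubectl -n ingress-nginx patch configmap ingress-nginx-controller --type merge \
  -p '{"data":{"global-rate-limit-memcached-host":"memcached.default.svc"}}'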

How can I remove a self-signed root certificate?

Posted: 04 Jul 2021 09:15 PM PDT

I'm using Bitnami WordPress on GCP. I'm new to server networking.

There is this self-signed root certificate that I can't change. I have not installed any SSL certificate on the server. I've searched through the Bitnami WordPress documentation but can't find any mention of this.

I've tried deleting the certificates I found on the server and generating a temporary SSL certificate, but it still shows an insecure connection for example.com. The certificate is literally issued to example.com.

Is this the default self-signed certificate for Apache? How can I remove it and install an SSL certificate for the actual domain name?


Kubernetes: how to limit root disk space for pod

Posted: 04 Jul 2021 08:23 PM PDT

I have a pod deployed on a node with a 100 GB volume. I only want the pod to have 50 GB of root disk space.

So I have this config in the deployment YAML:

resources:
  requests:
    ephemeral-storage: "50G"
  limits:
    ephemeral-storage: "70G"

But when I checked the container (there is only one container in the pod), I saw that all the disk space on the node was allocated to the pod. Reading https://stackoverflow.com/a/58103991/3908406, I thought ephemeral-storage was the setting that controls how much disk space is allocated to the pod.

# df -h
Filesystem      Size  Used Avail Use% Mounted on
overlay         100G  6.5G   94G   7% /
tmpfs            64M     0   64M   0% /dev
tmpfs           1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/nvme0n1p1  100G  6.5G   94G   7% /etc/hosts
shm              64M     0   64M   0% /dev/shm
tmpfs           1.9G     0  1.9G   0% /proc/acpi
tmpfs           1.9G     0  1.9G   0% /sys/firmware

Any idea what I did wrong?
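
Worth noting as a general point (not from the question): ephemeral-storage requests and limits are enforced by kubelet eviction accounting, not by shrinking the container's filesystem, so df inside the container keeps reporting the node's full disk. If the goal is a hard cap on a writable path, a size-limited emptyDir is one option; the sketch below is illustrative:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: scratch-capped        # illustrative name
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: scratch
      mountPath: /scratch     # writes are confined to this capped volume
  volumes:
  - name: scratch
    emptyDir:
      sizeLimit: 50Gi         # kubelet evicts the pod if usage exceeds this
EOF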

HP ProLiant ML360 G10 server won't start

Posted: 04 Jul 2021 06:34 PM PDT

I have an HP ProLiant ML360 G10 server I have been working on for the past couple of days with no problem. Today I tried to boot it and it's not working. I noticed that all 4 lights on the front panel are blinking nonstop. The server has two power supplies, and both of them are lit green. I've tried running the system on either power supply and on both. Also, the rear UID is blinking blue. The server does not react to the power-on button.

According to the manual, the number of times the 4 lights flash will pinpoint the problem, but the lights are flashing continuously.

Below is from manual:

  1. System board
  2. Processor
  3. Memory
  4. Riser board PCIe slots
  5. FlexibleLOM
  6. Removable HPE Flexible Smart Array controller/Smart SAS HBA controller
  7. System board PCIe slots
  8. Power backplane or storage backplane
  9. Power supply

I was hoping someone could point me in the right direction before I start an RMA with HP.

Thanks

Does it matter to partition something before dd?

Posted: 04 Jul 2021 06:48 PM PDT

Normally, to write an ISO to a disk (say, an ISO for installing Ubuntu), we dd to /dev/sda, not to a partition like /dev/sda1. So in this case, where the target is /dev/sda, which is presumably the entire disk, does it make any sense to partition first? Wouldn't the ISO contain the partitions?
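
For what it's worth, an installer ISO like Ubuntu's is a hybrid image that carries its own partition table, so dd copies that table along with everything else, and any partitioning done beforehand is simply overwritten. A typical invocation (file and device names illustrative):

# The image brings its own partition table; whatever existed on /dev/sda
# beforehand is overwritten wholesale:
dd if=ubuntu.iso of=/dev/sda bs=4M status=progress conv=fsync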

Netcat on MacOS failing with simple commands

Posted: 04 Jul 2021 05:22 PM PDT

I'm trying to follow a tutorial here.

As an example we can say that we need a friend of ours to help us with something on our work system (Windows). To be able to receive a connection from us, first our friend would need to set up a listener: nc -lvnp 4444

On my system, nc -nlvp 4444 fails with nc: missing port with option -l. I'm not sure why this is, as I've seen multiple tutorials with the same commands where it works fine. Is this a macOS issue, or is it between keyboard and chair?
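
For comparison, the BSD-flavoured netcat that ships with macOS takes the listen port as a positional argument; -p together with -l is a habit from the traditional/GNU variants, and some Apple builds reject the combination with exactly this error. A sketch of the equivalent listener:

# macOS (BSD-style) netcat: the port is positional, no -p with -l:
nc -lvn 4444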

When does task scheduler close and not close processes?

Posted: 04 Jul 2021 04:04 PM PDT

Is it a rule that when a process exits with a non-zero exit code, Task Scheduler won't close the process at its completion?

I haven't seen this documented or discussed anywhere, but it just seems to be what I have observed; or maybe there is some other rule that governs when a process is closed.

And do any of the task settings like "Stop the task if it runs longer than" affect this behavior? I.e. does such a suspended process get stopped eventually?

OVH VPS ports not modified. Where are those rules set?

Posted: 04 Jul 2021 06:17 PM PDT

I have a problem with a VPS ports config (on ovh.com).

  • I set up Debian 10 and updated it.
  • I didn't install any firewall software.
  • OVH infrastructure is protected by a global firewall, but on my panel it's shown as disabled and there are no rules applied.
  • Default ports work fine (HTTP, TCP, UDP, SSH, FTP, DNS and SSL).

EXAMPLE (FTP)

FTP works just fine on port 21:

client

$ ftp XX.XX.XX.XX
Connected to XX.XX.XX.XX.
220 (vsFTPd 3.0.3)
Name (XX.XX.XX.XX:f): NAME
331 Please specify the password.
Password:
230 Login successful.
Remote system type is UNIX.
Using binary mode to transfer files.
ftp>

If I go to yougetsignal.com, it shows port 21 is open and port 2121 is closed.

Then I change the listening port to 2121, and it doesn't connect any more.

server

# sudo nano /etc/vsftpd.conf
# changes done: "listening_port=2121"
# sudo service vsftpd restart

$ sudo netstat -tnlp | grep :2121
tcp6   0   0   :::2121   :::*   LISTEN   28582/vsftpd

#rules allow all in
$ sudo iptables -S
-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT

$ sudo ip6tables -S
-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT

client

$ ftp
ftp> open XX.XX.XX.XX
ftp: connect: Connection refused
ftp> open XX.XX.XX.XX:2121
ftp: connect: Connection refused

If I go to yougetsignal.com, it shows port 21 is closed and port 2121 is closed.

The same happens for any other port.

Where else can those rules be defined, since any change I make to ip6tables or iptables seems to do nothing?
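
One general place to look on Debian 10, sketched below: buster defaults to the nftables backend for iptables, so rules written through one backend are invisible to tools that use the other.

# Compare both backends and the raw nftables ruleset:
sudo nft list ruleset
sudo iptables-legacy -S
sudo update-alternatives --display iptables    # shows which backend is active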

UPDATE

As commented by Michael, it was a space, not a colon.

client

ftp> open XX.XX.XX.XX 2121
Connected to 51.222.30.108.
220 (vsFTPd 3.0.3)

On the other hand, the external test still shows "Closed", and the command suggested by vidarlo in a comment, netcat -lp 2121, returns Can't grab 0.0.0.0:2121 with bind.

How Do I Prevent User Profile Removal @ Sign in Screen : Chrome Enterprise

Posted: 04 Jul 2021 02:36 PM PDT

Chromebook with Enterprise Upgrade:

On the sign in screen I can select a drop down arrow next to a user name and "remove the account" from the Chromebook EVEN THOUGH I'M NOT SIGNED IN.

Please tell me I'm missing an Admin Panel security setting that can override this totally insane default.

POTENTIAL SCENARIO: Go to Starbucks. Order coffee. Sign out. Go to the men's room, and my 8-year-old (or some joker) blasts my user account from the Chromebook without even being signed in to the box! What? Really? No big deal, you say, it's a cloud-first device. Well, I have a VPN client that just got blasted. Now I'm screwed. I'm locked out of the corporate network. There are likely other horrible scenarios, but like I said, I'm new to this platform. You get the point.

How to restore LVM snapshot of larger volume on smaller disk?

Posted: 04 Jul 2021 04:00 PM PDT

I am trying to back up and restore an LVM volume on a local Ubuntu 20.04 server. The production server has an LV of 500G, but only 19G are used so far.

The local dev server has 24G of free space where I intend to restore the 19G from production. I use -L 24G as a parameter when taking the LVM snapshot. The restore fails with the message "no space left on device":

Production Server:

sudo lvcreate -s /dev/vg0/test -n backup_test -L 24G
sudo dd if=/dev/vg0/backup_test | lz4 > test_lvm.ddimg.lz4

1048576000+0 records in
1048576000+0 records out
536870912000 bytes (537 GB, 500 GiB) copied, 967.79 s, 555 MB/s

sudo lvdisplay /dev/vg0/backup_test
  --- Logical volume ---
  LV Path                /dev/vg0/backup_test
  LV Name                backup_test
  VG Name                vg0
  LV UUID                IsGBmM-VM7C-2sO4-VrC1-kHKg-EzcR-4Hej44
  LV Write Access        read/write
  LV Creation host, time leo, 2021-07-04 12:45:45 +0200
  LV snapshot status     active destination for m360
  LV Status              available
  # open                 0
  LV Size                500.00 GiB
  Current LE             128000
  COW-table size         24.00 GiB
  COW-table LE           6144
  Allocated to snapshot  0.01%
  Snapshot chunk size    4.00 KiB
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:10

Local Test Server:

sudo lvcreate -n restore -L 24.5G data
sudo mkfs.ext4 /dev/data/restore
sudo lz4 -d test_lvm.ddimg.lz4 | sudo dd of=/dev/data/restore
Warning : using stdout as default output. Do not rely on this behavior: use explicit `-c` instead !
dd: writing to '/dev/data/restore': No space left on device
51380225+0 records in
51380224+0 records out
26306674688 bytes (26 GB, 24 GiB) copied, 1181.48 s, 22.3 MB/s

Is there a way to restore the 500G volume containing only 19G onto a disk smaller than 500G?
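
One approach that might work, assuming an ext4 filesystem sits directly on the LV and can be taken offline: shrink the filesystem below the target size first, then image only that much. This is a sketch rather than a tested recipe; the sizes are illustrative, and getting them wrong destroys data, so back everything up first.

sudo e2fsck -f /dev/vg0/test
sudo resize2fs /dev/vg0/test 20G                # shrink the fs to roughly the used size
sudo dd if=/dev/vg0/test bs=4M count=5120 | lz4 > test_small.ddimg.lz4   # 5120 * 4 MiB = 20 GiB
# and on the dev server, after writing the image out:
sudo resize2fs /dev/data/restore                # grow the fs to fill the 24.5G LV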

Multipart text/html email from Invision Community not parsing correctly

Posted: 04 Jul 2021 09:43 PM PDT

First of all, yes I have contacted Invision Community support. Yes, I am a paying user with an active license. We went back and forth for several hours and all they have to say is there's nothing wrong with their software, and I need to fix my server.

I'm using CentOS 7.9.2009, PHP 8.0.7, Apache 2.4.6, and Exim 4.94.2.

When Invision Community sends emails, the From: value says Apache or Root instead of the name of the forum, and the HTML part of the email just shows code instead of the message having a text part and an HTML part.

I also have PHP code that I wrote myself, using the mail() function on the same server with the same version of PHP, that sends perfect multipart emails.

Comparing the source of an email sent by Invision and an email sent by my code, I noticed that the From: and Content-Type: headers on the Invision email are indented by one space, and those same headers on the emails sent by my code don't have any whitespace in front of them. Those headers seem to be ignored by the mail client. Here's what the source looks like:

MIME-Version: 1.0
 From: =?UTF-8?B?SGVsaW9OZXQ=?= <admin@example.com>
 Date: Tue, 29 Jun 2021 19:04:36 +0000
 Auto-Submitted: auto-generated
 Content-Type: multipart/alternative; boundary="--==_mimepart_3d7970817fa277e018f1936f5865d582"; charset=UTF-8
 Content-Transfer-Encoding: 8bit
Message-Id: <E1lyJ2C-00028k-9y@example.com>

Notice the single space in front of From, Date, Auto-Submitted, and Content-Type.

Here's what those headers look like on an email that displays correctly from the same server:

MIME-Version: 1.0
From: "Example" <support@example.com>
List-Unsubscribe: <mailto:unsubscribe@example.com>, <https://www.example.com/unsubscribe/?c=3d9795e2646d156972cdf58655c758bd
Content-type: multipart/alternative;boundary=helio60db72e0c0005
Message-Id: <E1lyJJA-00036U-Px@example.com>
Date: Tue, 29 Jun 2021 19:22:08 +0000
X-AntiAbuse: This header was added to track abuse, please include it with any abuse report

First of all, would these spaces cause the issues I'm seeing?

Second, if the spaces are the issue how do I go about fixing them?

Digging around in Invision's code, it looks like they use the PHP PEAR Mail package (https://pear.php.net/package/Mail). Obviously it's probably a bad idea to edit Invision's code or PEAR's code, because any changes would likely be overwritten the next time I update. Are there any server settings I could change to fix this? Let me know if you need additional information and I'll edit it in. Thanks!
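
On the first question: yes, very likely. RFC 5322 treats a header line that begins with whitespace as a continuation (folding) of the previous header, so the indented From: and Content-Type: lines are parsed as part of MIME-Version: and never seen as headers in their own right. A quick way to confirm on a saved copy of the message (the file name is illustrative):

# List header lines that start with whitespace; each one is folded into the
# header above it instead of being parsed separately:
sed -n '1,/^$/p' message.eml | grep -n '^[[:space:]]'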

HP Proliant ML350 Gen9 doesn't start

Posted: 04 Jul 2021 06:17 PM PDT

We have an HP ProLiant ML350 G9 server, and yesterday it just stopped suddenly. I checked the power supply and it's a stable 220 VAC. The server has two power supplies, and both of them are lit green. Also, the UID is blinking blue. On the front panel all 4 LEDs are blinking simultaneously and the server doesn't start. There is only one green blinking LED on the motherboard. The fans are also not working.

I'm sure the power supplies are OK; what else could the problem be?

PowerDNS - SOA serial mismatch on slaves

Posted: 04 Jul 2021 08:18 PM PDT

I am using PowerDNS with the PostgreSQL backend on three different servers, in native mode; all replication is done at the PostgreSQL level. No server is set as master, and pdns.conf is almost at default settings.

When I create a new zone using the PowerDNS API on the first server, I can see the zone and records in all three databases. And when using dig SOA example.com @SERVER.IP I receive a response from all servers, but with different SOA serials:

The first server, which is the "primary" server and configured as such in the SOA record, has the SOA serial 2020081503; the other two have 2020081505.

When making changes to the zone, the serial is increasing, but the offset of 2 stays the same.

All servers are configured the same, except that the first one has the API and web server enabled in pdns.conf, and that it is mentioned in the SOA record.

Any idea where the offset comes from and how it can be synced?
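
One thing that might be worth ruling out: PowerDNS can rewrite the serial at query time through SOA-EDIT zone metadata (or a default-soa-edit setting in pdns.conf), in which case servers sharing one database can still answer with different serials. A sketch of where to look (zone name from the question):

pdnsutil get-meta example.com SOA-EDIT
pdnsutil get-meta example.com SOA-EDIT-API
grep -i soa-edit /etc/powerdns/pdns.conf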

Thank you!

How to add CPU core licenses (I've already purchased) to Windows Server Standard 2019?

Posted: 04 Jul 2021 09:44 PM PDT

I have purchased a 24-core server (dual 12-core CPUs). I installed Windows Server Standard 2019, whose base license covers 16 cores by default.

I want to use the additional 8 CPU cores, so I purchased two additional 4-core licenses for Windows Server 2019 Standard. When I received them, there were no instructions on how to activate the additional 8 cores from the two 4-core licenses. Cumulatively, I've now spent about 8 hours digging through menus, googling, and digging down through the Microsoft website, and have come up empty. I tried to call Microsoft support, but spent hours and did not manage to get through to a human on Server 2019.

It must be so simple. Can anyone please tell me how I can activate the additional 8 cores now that I have purchased the official Microsoft 4-core licenses from NewEgg (a reputable source)?

Task Scheduler not responding when attempting to create a new trigger on all Windows 2008 R2 servers

Posted: 04 Jul 2021 04:02 PM PDT

Issue: Task Scheduler not responding when attempting to create a new trigger or edit a trigger in Windows 2008 R2

The issue happens on ALL our Windows 2008 R2 servers. The issue happens regardless of which user account I log on with.

I also notice that once Task Scheduler freezes and I close it, I can re-open task scheduler and sometimes I am able to edit a trigger or create a new trigger.

Here is the full error message from event viewer regarding the Task Scheduler: screen capture of Task Scheduler error

zabbix 3.2 - How to monitor RHEL7.2 services in zabbix

Posted: 04 Jul 2021 06:01 PM PDT

I am new to Zabbix monitoring. I have two operating systems to monitor, Windows and Linux. I have set up monitoring of Windows services, but I am not getting any results on Google for monitoring Linux server services.

I want to monitor service status and each service's CPU and RAM utilization. I have done some research and found the following, but it doesn't do what I want:

net.tcp.service['redis-server']  // it's not working
net.tcp.listen[6379]             // it returns the status for the service, but I don't want to use it

I want to monitor services based on the service name, like I did in Windows:

service.info[redis-server,state]  

This works on my Windows hosts. Please help with monitoring Linux services.
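
Since there is no direct Linux equivalent of service.info, one workaround is the process-based item keys, which take a process name much like the Windows key takes a service name; a sketch, testable from the server with zabbix_get (the host name is a placeholder):

zabbix_get -s linux-host -k 'proc.num[redis-server]'       # number of matching processes; 0 = not running
zabbix_get -s linux-host -k 'proc.mem[redis-server]'       # memory used by those processes, in bytes
zabbix_get -s linux-host -k 'proc.cpu.util[redis-server]'  # CPU utilisation in percent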

Nginx redirecting every url to localhost

Posted: 04 Jul 2021 08:02 PM PDT

I have a Django website running with Nginx and Gunicorn. Every time I call a URL on the server, for example website/url, it redirects to localhost/url. I have included the nginx settings from both nginx.conf and sites-available/site-name below.

nginx.conf:

user www-data;
worker_processes auto;
pid /run/nginx.pid;

events {
    worker_connections 768;
    # multi_accept on;
}

http {

    ##
    # Basic Settings
    ##

    client_max_body_size 5M;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    # server_tokens off;

    # server_names_hash_bucket_size 64;
    server_name_in_redirect off;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # SSL Settings
    ##

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;

    ##
    # Logging Settings
    ##

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ##
    # Gzip Settings
    ##

    gzip on;
    gzip_disable "msie6";

    # gzip_vary on;
    # gzip_proxied any;
    # gzip_comp_level 6;
    # gzip_buffers 16 8k;
    # gzip_http_version 1.1;
    # gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

    ##
    # Virtual Host Configs
    ##

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}

nginx/sites-available/site-name:

server {
    listen 80;
    server_name url;
    server_name_in_redirect off;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    location /static/ {
        alias /opt/itc/iitb-tech/static/;
    }

    location / {
        proxy_pass http://unix:/opt/itc/iitb-tech/itc_iitb/itc_iitb.sock;
        proxy_set_header X-Forwarded-Host url.org;
        proxy_set_header X-Real-IP $remote_addr;
        add_header P3P 'CP="ALL DSP COR PSAa PSDa OUR NOR ONL UNI COM NAV"';
    }
}
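
A commonly suggested adjustment for this symptom, offered as a guess rather than a confirmed fix: over a unix socket, Django only sees whatever Host header nginx forwards, and without one it can end up rebuilding redirects against localhost. Forwarding the original host inside the location / block might help:

# Hypothetical tweak inside the existing "location /" block:
#     proxy_set_header Host $host;
# then verify the config and reload:
sudo nginx -t && sudo systemctl reload nginx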

Django settings file

"""  Django settings for itc_iitb project.    Generated by 'django-admin startproject' using Django 1.8.7.    For more information on this file, see  https://docs.djangoproject.com/en/1.8/topics/settings/    For the full list of settings and their values, see  https://docs.djangoproject.com/en/1.8/ref/settings/  """    # Build paths inside the project like this: os.path.join(BASE_DIR,   ...)  import os   BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))      # Quick-start development settings - unsuitable for production  # See   https://docs.djangoproject.com/en/1.8/howto/deployment/checklist/    # SECURITY WARNING: don't run with debug turned on in production!  DEBUG = False    ALLOWED_HOSTS = ['server ip','127.0.0.1','localhost']    ALLOWED_PORTS = ['*']  # Application definition    INSTALLED_APPS = (  'aero',  'erc',  'biotech',  'mnp',  'krittika',  'main',  'srg',  'ITSP2017',  'ec',  'django.contrib.admin',  'django.contrib.auth',  'django.contrib.contenttypes',  'django.contrib.sessions',  'django.contrib.messages',  'django.contrib.staticfiles',  )    MIDDLEWARE_CLASSES = (      'django.contrib.sessions.middleware.SessionMiddleware',      'django.middleware.common.CommonMiddleware',      'django.middleware.csrf.CsrfViewMiddleware',      'django.contrib.auth.middleware.AuthenticationMiddleware',      'django.contrib.auth.middleware.SessionAuthenticationMiddleware',      'django.contrib.messages.middleware.MessageMiddleware',      'django.middleware.clickjacking.XFrameOptionsMiddleware',      'django.middleware.security.SecurityMiddleware',  )  ROOT_URLCONF = 'itc_iitb.urls'    TEMPLATES = [  {      'BACKEND': 'django.template.backends.django.DjangoTemplates',      'DIRS': [],      'APP_DIRS': True,      'OPTIONS': {          'context_processors': [              'django.template.context_processors.debug',              'django.template.context_processors.request',              'django.contrib.auth.context_processors.auth',              'django.contrib.messages.context_processors.messages',          ],      },    },   ]     WSGI_APPLICATION = 'itc_iitb.wsgi.application'      # Database  # https://docs.djangoproject.com/en/1.8/ref/settings/#databases    DATABASES = {      'default': {          'ENGINE': 'django.db.backends.postgresql_psycopg2', # Add   'postgres$          'NAME': 'dbname',                      # Or path to database    file$          # The following settings are not used with sqlite3:          'USER': 'user',          'PASSWORD': 'pwd',          'HOST': 'localhost',                      # Empty for   localhost thr$          'PORT': '',                      # Set to empty string for   default.      }  }    # Internationalization  # https://docs.djangoproject.com/en/1.8/topics/i18n/    LANGUAGE_CODE = 'en-us'    TIME_ZONE = 'UTC'    USE_I18N = True    USE_L10N = True    USE_TZ = True    STATIC_URL = '/static/'  STATIC_ROOT = "/opt/itc/iitb-tech/static/"  MEDIA_ROOT = BASE_DIR + '/ITSP2017/static/media/'  MEDIA_URL = '/static/media/'    APPEND_SLASH = True  

heroku redis error : NOAUTH Authentication required

Posted: 04 Jul 2021 07:01 PM PDT

I want to deploy my Django project to Heroku. My project uses Channels, so I need to set up an ASGI-based environment.

I have two add-ons installed in the Heroku app. One is Heroku-postgresql and the other is Heroku-redis.

And I have two dynos; please refer to the picture below.

addons, dyno: [screenshot]

I succeeded in pushing the Django project to the Heroku git remote with $ git push heroku master.

build, deployed: [screenshot]

So I got a URL: demo-multichat.herokuapp.com

If you access that URL, you will see an Application Error.

I got the log messages using $ heroku logs. Below are the logs.

2017-09-08T11:54:50.421663+00:00 app[web.1]: 2017-09-08 20:54:50,421 ERROR    Error trying to receive messages: NOAUTH Authentication required.
2017-09-08T11:54:55.424117+00:00 app[web.1]: 2017-09-08 20:54:55,423 ERROR    Error trying to receive messages: NOAUTH Authentication required.
2017-09-08T11:55:00.426387+00:00 app[web.1]: 2017-09-08 20:55:00,426 ERROR    Error trying to receive messages: NOAUTH Authentication required.
2017-09-08T11:55:05.428815+00:00 app[web.1]: 2017-09-08 20:55:05,428 ERROR    Error trying to receive messages: NOAUTH Authentication required.
2017-09-08T11:55:10.430511+00:00 app[web.1]: 2017-09-08 20:55:10,430 ERROR    Error trying to receive messages: NOAUTH Authentication required.
2017-09-08T11:55:15.432112+00:00 app[web.1]: 2017-09-08 20:55:15,431 ERROR    Error trying to receive messages: NOAUTH Authentication required.
2017-09-08T11:55:18.486722+00:00 heroku[worker.1]: State changed from crashed to starting
2017-09-08T11:55:20.434001+00:00 app[web.1]: 2017-09-08 20:55:20,433 ERROR    Error trying to receive messages: NOAUTH Authentication required.
2017-09-08T11:55:22.996928+00:00 heroku[worker.1]: Starting process with command `python manage.py runworker`
2017-09-08T11:55:23.685971+00:00 heroku[worker.1]: State changed from starting to up
2017-09-08T11:55:25.435766+00:00 app[web.1]: 2017-09-08 20:55:25,435 ERROR    Error trying to receive messages: NOAUTH Authentication required.
2017-09-08T11:55:27.264386+00:00 app[worker.1]: 2017-09-08 20:55:27,264 - INFO - runworker - Using single-threaded worker.
2017-09-08T11:55:27.264803+00:00 app[worker.1]: 2017-09-08 20:55:27,264 - INFO - runworker - Running worker against channel layer default (asgi_redis.core.RedisChannelLayer)
2017-09-08T11:55:27.265237+00:00 app[worker.1]: 2017-09-08 20:55:27,264 - INFO - worker - Listening on channels chat.receive, http.request, websocket.connect, websocket.disconnect, websocket.receive
2017-09-08T11:55:27.276810+00:00 app[worker.1]:   File "/app/.heroku/python/lib/python3.6/site-packages/redis/connection.py", line 484, in connect
2017-09-08T11:55:27.277217+00:00 app[worker.1]:   File "/app/.heroku/python/lib/python3.6/site-packages/redis/connection.py", line 541, in _connect
2017-09-08T11:55:27.277631+00:00 app[worker.1]:     raise err
2017-09-08T11:55:27.276796+00:00 app[worker.1]: Traceback (most recent call last):
2017-09-08T11:55:27.277635+00:00 app[worker.1]:   File "/app/.heroku/python/lib/python3.6/site-packages/redis/connection.py", line 529, in _connect
2017-09-08T11:55:27.277992+00:00 app[worker.1]:     sock.connect(socket_address)
2017-09-08T11:55:27.278042+00:00 app[worker.1]: ConnectionRefusedError: [Errno 111] Connection refused
2017-09-08T11:55:27.277199+00:00 app[worker.1]:     sock = self._connect()
2017-09-08T11:55:27.278044+00:00 app[worker.1]:
2017-09-08T11:55:27.278045+00:00 app[worker.1]: During handling of the above exception, another exception occurred:
2017-09-08T11:55:27.278045+00:00 app[worker.1]:
2017-09-08T11:55:27.278049+00:00 app[worker.1]: Traceback (most recent call last):
2017-09-08T11:55:27.278050+00:00 app[worker.1]:   File "/app/.heroku/python/lib/python3.6/site-packages/redis/client.py", line 667, in execute_command
2017-09-08T11:55:27.278455+00:00 app[worker.1]:   File "/app/.heroku/python/lib/python3.6/site-packages/redis/connection.py", line 610, in send_command
2017-09-08T11:55:27.278453+00:00 app[worker.1]:     connection.send_command(*args)
2017-09-08T11:55:27.278858+00:00 app[worker.1]:     self.send_packed_command(self.pack_command(*args))
2017-09-08T11:55:27.278862+00:00 app[worker.1]:   File "/app/.heroku/python/lib/python3.6/site-packages/redis/connection.py", line 585, in send_packed_command
2017-09-08T11:55:27.279107+00:00 app[worker.1]:   File "/app/.heroku/python/lib/python3.6/site-packages/redis/connection.py", line 489, in connect
2017-09-08T11:55:27.279104+00:00 app[worker.1]:     self.connect()
2017-09-08T11:55:27.279337+00:00 app[worker.1]: redis.exceptions.ConnectionError: Error 111 connecting to localhost:6379. Connection refused.
2017-09-08T11:55:27.279322+00:00 app[worker.1]:     raise ConnectionError(self._error_message(e))
2017-09-08T11:55:27.279338+00:00 app[worker.1]:
2017-09-08T11:55:27.279339+00:00 app[worker.1]: During handling of the above exception, another exception occurred:
2017-09-08T11:55:27.279339+00:00 app[worker.1]:
2017-09-08T11:55:27.279340+00:00 app[worker.1]: Traceback (most recent call last):
2017-09-08T11:55:27.279340+00:00 app[worker.1]:   File "/app/.heroku/python/lib/python3.6/site-packages/redis/connection.py", line 484, in connect
2017-09-08T11:55:27.279538+00:00 app[worker.1]:     sock = self._connect()
2017-09-08T11:55:27.279541+00:00 app[worker.1]:   File "/app/.heroku/python/lib/python3.6/site-packages/redis/connection.py", line 541, in _connect
2017-09-08T11:55:27.279757+00:00 app[worker.1]:     raise err
2017-09-08T11:55:27.279760+00:00 app[worker.1]:   File "/app/.heroku/python/lib/python3.6/site-packages/redis/connection.py", line 529, in _connect
2017-09-08T11:55:27.279971+00:00 app[worker.1]:     sock.connect(socket_address)
2017-09-08T11:55:27.279974+00:00 app[worker.1]: ConnectionRefusedError: [Errno 111] Connection refused
2017-09-08T11:55:27.279976+00:00 app[worker.1]:
2017-09-08T11:55:27.279977+00:00 app[worker.1]: During handling of the above exception, another exception occurred:
2017-09-08T11:55:27.279978+00:00 app[worker.1]:
2017-09-08T11:55:27.279991+00:00 app[worker.1]: Traceback (most recent call last):
2017-09-08T11:55:27.279993+00:00 app[worker.1]:   File "manage.py", line 10, in <module>
2017-09-08T11:55:27.280072+00:00 app[worker.1]:     execute_from_command_line(sys.argv)
2017-09-08T11:55:27.280083+00:00 app[worker.1]:   File "/app/.heroku/python/lib/python3.6/site-packages/django/core/management/__init__.py", line 363, in execute_from_command_line
2017-09-08T11:55:27.280251+00:00 app[worker.1]:     utility.execute()
2017-09-08T11:55:27.280254+00:00 app[worker.1]:   File "/app/.heroku/python/lib/python3.6/site-packages/django/core/management/__init__.py", line 355, in execute
2017-09-08T11:55:27.280421+00:00 app[worker.1]:     self.fetch_command(subcommand).run_from_argv(self.argv)
2017-09-08T11:55:27.280424+00:00 app[worker.1]:   File "/app/.heroku/python/lib/python3.6/site-packages/django/core/management/base.py", line 283, in run_from_argv
2017-09-08T11:55:27.280590+00:00 app[worker.1]:     self.execute(*args, **cmd_options)
2017-09-08T11:55:27.280591+00:00 app[worker.1]:   File "/app/.heroku/python/lib/python3.6/site-packages/django/core/management/base.py", line 330, in execute
2017-09-08T11:55:27.280762+00:00 app[worker.1]:     output = self.handle(*args, **options)
2017-09-08T11:55:27.280763+00:00 app[worker.1]:   File "/app/.heroku/python/lib/python3.6/site-packages/channels/management/commands/runworker.py", line 83, in handle
2017-09-08T11:55:27.280878+00:00 app[worker.1]:     worker.run()
2017-09-08T11:55:27.280879+00:00 app[worker.1]:   File "/app/.heroku/python/lib/python3.6/site-packages/channels/worker.py", line 87, in run
2017-09-08T11:55:27.280972+00:00 app[worker.1]:     channel, content = self.channel_layer.receive_many(channels, block=True)
2017-09-08T11:55:27.280973+00:00 app[worker.1]:   File "/app/.heroku/python/lib/python3.6/site-packages/asgiref/base_layer.py", line 43, in receive_many
2017-09-08T11:55:27.282189+00:00 app[worker.1]:     return self.receive(channels, block)
2017-09-08T11:55:27.282191+00:00 app[worker.1]:   File "/app/.heroku/python/lib/python3.6/site-packages/asgi_redis/core.py", line 215, in receive
2017-09-08T11:55:27.282191+00:00 app[worker.1]:     result = connection.blpop(list_names, timeout=self.blpop_timeout)
2017-09-08T11:55:27.282192+00:00 app[worker.1]:   File "/app/.heroku/python/lib/python3.6/site-packages/redis/client.py", line 1269, in blpop
2017-09-08T11:55:27.282193+00:00 app[worker.1]:     return self.execute_command('BLPOP', *keys)
2017-09-08T11:55:27.282193+00:00 app[worker.1]:   File "/app/.heroku/python/lib/python3.6/site-packages/redis/client.py", line 673, in execute_command
2017-09-08T11:55:27.282194+00:00 app[worker.1]:     connection.send_command(*args)
2017-09-08T11:55:27.282195+00:00 app[worker.1]:   File "/app/.heroku/python/lib/python3.6/site-packages/redis/connection.py", line 610, in send_command
2017-09-08T11:55:27.282196+00:00 app[worker.1]:     self.send_packed_command(self.pack_command(*args))
2017-09-08T11:55:27.282196+00:00 app[worker.1]:   File "/app/.heroku/python/lib/python3.6/site-packages/redis/connection.py", line 585, in send_packed_command
2017-09-08T11:55:27.282453+00:00 app[worker.1]:     self.connect()
2017-09-08T11:55:27.282454+00:00 app[worker.1]:   File "/app/.heroku/python/lib/python3.6/site-packages/redis/connection.py", line 489, in connect
2017-09-08T11:55:27.282744+00:00 app[worker.1]:     raise ConnectionError(self._error_message(e))
2017-09-08T11:55:27.282772+00:00 app[worker.1]: redis.exceptions.ConnectionError: Error 111 connecting to localhost:6379. Connection refused.
2017-09-08T11:55:27.459143+00:00 heroku[worker.1]: State changed from up to crashed
2017-09-08T11:55:27.446333+00:00 heroku[worker.1]: Process exited with status 1
2017-09-08T11:55:10+00:00 app[heroku-redis]: source=REDIS sample#active-connections=2 sample#load-avg-1m=0.315 sample#load-avg-5m=0.21 sample#load-avg-15m=0.16 sample#read-iops=0 sample#write-iops=0 sample#memory-total=15664264kB sample#memory-free=11905356kB sample#memory-cached=961068kB sample#memory-redis=299032bytes sample#hit-rate=1 sample#evicted-keys=0
2017-09-08T11:55:30.437885+00:00 app[web.1]: 2017-09-08 20:55:30,437 ERROR    Error trying to receive messages: NOAUTH Authentication required.
2017-09-08T11:55:35.439587+00:00 app[web.1]: 2017-09-08 20:55:35,439 ERROR    Error trying to receive messages: NOAUTH Authentication required.
2017-09-08T11:55:40.441760+00:00 app[web.1]: 2017-09-08 20:55:40,441 ERROR    Error trying to receive messages: NOAUTH Authentication required.
2017-09-08T11:55:45.443072+00:00 app[web.1]: 2017-09-08 20:55:45,442 ERROR    Error trying to receive messages: NOAUTH Authentication required.
2017-09-08T11:55:50.444948+00:00 app[web.1]: 2017-09-08 20:55:50,444 ERROR    Error trying to receive messages: NOAUTH Authentication required.
2017-09-08T11:55:55.446645+00:00 app[web.1]: 2017-09-08 20:55:55,446 ERROR    Error trying to receive messages: NOAUTH Authentication required.
2017-09-08T11:56:00.449178+00:00 app[web.1]: 2017-09-08 20:56:00,448 ERROR    Error trying to receive messages: NOAUTH Authentication required.
2017-09-08T11:56:05.450944+00:00 app[web.1]: 2017-09-08 20:56:05,450 ERROR    Error trying to receive messages: NOAUTH Authentication required.
2017-09-08T11:56:10.452898+00:00 app[web.1]: 2017-09-08 20:56:10,452 ERROR    Error trying to receive messages: NOAUTH Authentication required.
2017-09-08T11:56:15.456223+00:00 app[web.1]: 2017-09-08 20:56:15,456 ERROR    Error trying to receive messages: NOAUTH Authentication required.
2017-09-08T11:56:20.457782+00:00 app[web.1]: 2017-09-08 20:56:20,457 ERROR    Error trying to receive messages: NOAUTH Authentication required.
2017-09-08T11:56:25.459987+00:00 app[web.1]: 2017-09-08 20:56:25,459 ERROR    Error trying to receive messages: NOAUTH Authentication required.
2017-09-08T11:56:30.461919+00:00 app[web.1]: 2017-09-08 20:56:30,461 ERROR    Error trying to receive messages: NOAUTH Authentication required.
2017-09-08T11:56:08+00:00 app[heroku-redis]: source=REDIS sample#active-connections=2 sample#load-avg-1m=0.305 sample#load-avg-5m=0.24 sample#load-avg-15m=0.17 sample#read-iops=0 sample#write-iops=0 sample#memory-total=15664264kB sample#memory-free=11911660kB sample#memory-cached=961060kB sample#memory-redis=299032bytes sample#hit-rate=1 sample#evicted-keys=0
2017-09-08T11:56:35.463998+00:00 app[web.1]: 2017-09-08 20:56:35,463 ERROR    Error trying to receive messages: NOAUTH Authentication required.
2017-09-08T11:56:40.466396+00:00 app[web.1]: 2017-09-08 20:56:40,466 ERROR    Error trying to receive messages: NOAUTH Authentication required.
2017-09-08T11:56:45.467830+00:00 app[web.1]: 2017-09-08 20:56:45,467 ERROR    Error trying to receive messages: NOAUTH Authentication required.
2017-09-08T11:56:50.469777+00:00 app[web.1]: 2017-09-08 20:56:50,469 ERROR    Error trying to receive messages: NOAUTH Authentication required.
2017-09-08T11:56:55.471533+00:00 app[web.1]: 2017-09-08 20:56:55,471 ERROR    Error trying to receive messages: NOAUTH Authentication required.

Among the various errors, NOAUTH Authentication required stands out.

What's wrong? I wonder whether I correctly set up the significant files (asgi.py, wsgi.py, settings.py, manage.py, Procfile).

Could you look into those files? I'm leaving my project's GitHub repository below.

URL : github.com/juhyun16/multichat

multichat/asgi.py, wsgi.py

settings/dev.py, prod.py, settings.py

manage.py, Procfile

If the settings are not the problem, then what is wrong? I appreciate your help. P.S. My reputation is less than 10, so I can't post more than 2 links. Sorry for the inconvenience.
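
One observation on the logs, offered as a guess: the worker is dialling localhost:6379 and the web dyno gets NOAUTH, which usually means the channel layer is not reading the Redis URL (and password) that Heroku provisions. Checking the config var is a first step; the channel layer's hosts setting then needs to consume it instead of the localhost default:

# Heroku exposes the full redis:// URL, password included, as a config var:
heroku config:get REDIS_URL
# The CHANNEL_LAYERS "hosts" setting should use that URL, e.g. (Python side,
# shown here only as a comment):
#   "hosts": [os.environ.get("REDIS_URL", "redis://localhost:6379")],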

How to remove suspended IIS application pool

Posted: 04 Jul 2021 05:03 PM PDT

I have a web server running IIS 8.5. OS version: Windows Server 2012.

When the number of hits increases, many php-cgi.exe processes open in the Windows process list, but after some time a lot of the processes show a suspended state.

As a result, the site goes down and the system becomes too slow to respond. There is a recycling/restart option in the IIS application pool.

But if I restart the application pool every minute, then currently connected users will be cut off. How can we remove only the suspended processes? Please check the images.

[screenshots]

Nginx's location and proxy_pass missing a request when hash is present in url

Posted: 04 Jul 2021 07:01 PM PDT

I have this config

server {
  listen 8080;
  access_log  /var/log/nginx/access.log;
  root        /usr/share/nginx/htdocs;
  index       index.html index.htm;
  port_in_redirect off;

  location /somepath/ {
      proxy_pass http://someinternalserver/somepath/;
  }

  location /health {
    return 200;
  }
}

When I access it like this, http://our-external-fqdn/somepath/ it works.

However, when I access it like this, http://our-external-fqdn/somepath/#, I get redirected to our local development setup, which is http://localhost:8000.

What am I missing?
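
Worth keeping in mind while debugging: browsers never send the # fragment to the server, so nginx sees an identical request with or without it; whatever differs has to come from a redirect the client then follows. Comparing the raw Location headers may show it (hostname as in the question):

curl -sI http://our-external-fqdn/somepath | grep -i '^location'
curl -sI http://our-external-fqdn/somepath/ | grep -i '^location'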

Windows Server notify when installing updates finished

Posted: 04 Jul 2021 04:02 PM PDT

The Windows Servers in our environment (2008 R2 and 2012 R2) install updates automatically once a month but don't restart automatically (yet), because the WSUS server should be the last server to restart and not all domain controllers should restart at the same time. How can I set up a system where each server reports to a central server when it has finished installing all updates? I'd like to write a script that gets the update status of a server and then decides whether the server is allowed to reboot.

Also, I miss (WSUS/Windows) update notifications in general; for example, if an update fails, I'd like to get a notification rather than having to check each server manually.

nginx recv() failed (104: Connection reset by peer) while reading response header from upstream

Posted: 04 Jul 2021 03:21 PM PDT

I have recently upgraded my Magento from 1.5 to 1.9, and whenever I add a certain product to the basket, I receive this error: 502 Bad Gateway

There were no log entries in the var/log/ folder.

So, I had a look at my nginx errors and I found the following entries in nginx-errors.log:

2015/04/09 10:58:03 [error] 15208#0: *3 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 46.xxx.xxx.xxx, server: dev.my-domain.co.uk, request: "POST /checkout/cart/add/uenc/aHR0cDovL2Rldi5zYWx2ZW8uY28udWsvdGludGktYmF0aC1wYWludGluZy1zb2FwLTcwbWwuaHRtbD9fX19TSUQ9VQ,,/product/15066/form_key/eYLc3lQ35BSrk6Pa/ HTTP/1.1", upstream: "fastcgi://unix:/var/run/php-fcgi-www-data.sock:", host: "dev.my-domain.co.uk", referrer: "http://dev.my-domain.co.uk/tinti-bath-painting-soap-70ml.html"
2015/04/09 11:04:42 [error] 15208#0: *13 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 46.xxx.xxx.xxx, server: dev.my-domain.co.uk, request: "POST /checkout/cart/add/uenc/aHR0cDovL2Rldi5zYWx2ZW8uY28udWsvdGludGktYmF0aC1wYWludGluZy1zb2FwLTcwbWwuaHRtbD9fX19TSUQ9VQ,,/product/15066/form_key/eYLc3lQ35BSrk6Pa/ HTTP/1.1", upstream: "fastcgi://unix:/var/run/php-fcgi-www-data.sock:", host: "dev.my-domain.co.uk", referrer: "http://dev.my-domain.co.uk/tinti-bath-painting-soap-70ml.html"
2015/04/09 11:05:03 [error] 15208#0: *16 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 46.xxx.xxx.xxx, server: dev.my-domain.co.uk, request: "POST /checkout/cart/add/uenc/aHR0cDovL2Rldi5zYWx2ZW8uY28udWsvdGludGktYmF0aC1wYWludGluZy1zb2FwLTcwbWwuaHRtbD9fX19TSUQ9VQ,,/product/15066/form_key/eYLc3lQ35BSrk6Pa/ HTTP/1.1", upstream: "fastcgi://unix:/var/run/php-fcgi-www-data.sock:", host: "dev.my-domain.co.uk", referrer: "http://dev.my-domain.co.uk/tinti-bath-painting-soap-70ml.html"
2015/04/09 11:12:07 [error] 15273#0: *1 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 46.xxx.xxx.xxx, server: dev.my-domain.co.uk, request: "POST /checkout/cart/add/uenc/aHR0cDovL2Rldi5zYWx2ZW8uY28udWsvdGludGktYmF0aC1wYWludGluZy1zb2FwLTcwbWwuaHRtbD9fX19TSUQ9VQ,,/product/15066/form_key/eYLc3lQ35BSrk6Pa/ HTTP/1.1", upstream: "fastcgi://unix:/var/run/php-fcgi-www-data.sock:", host: "dev.my-domain.co.uk", referrer: "http://dev.my-domain.co.uk/tinti-bath-painting-soap-70ml.html"

I have installed Magento on a custom LEMP stack.

This error only appears to happen when I add a specific product to the basket in my upgraded Magento, and every time the error occurs, I can see a core.XXXXX file (approx. 350 MB) in the public_html folder.

Any idea why my php-fpm is crashing like this? How can I find the cause and fix it?

Here are the last entries on my Linux (CentOS) server when I run the dmesg command:

php-fpm[14862]: segfault at 7fff38236ff8 ip 00000000005c02ba sp 00007fff38237000 error 6 in php-fpm[400000+325000]
php-fpm[15022]: segfault at 7fff38351ff0 ip 00000000005bf6e5 sp 00007fff38351fb0 error 6 in php-fpm[400000+325000]
php-fpm[15021]: segfault at 7fff38351ff0 ip 00000000005bf6e5 sp 00007fff38351fb0 error 6 in php-fpm[400000+325000]
php-fpm[15156]: segfault at 7fff38351ff0 ip 00000000005bf6e5 sp 00007fff38351fb0 error 6 in php-fpm[400000+325000]
php-fpm[15024]: segfault at 7fff38351ff0 ip 00000000005bf6e5 sp 00007fff38351fb0 error 6 in php-fpm[400000+325000]
php-fpm[15223]: segfault at 7fff8d1d5fd8 ip 00000000005c02ba sp 00007fff8d1d5fe0 error 6 in php-fpm[400000+325000]
php-fpm[15222]: segfault at 7fff8d1d5fd8 ip 00000000005c02ba sp 00007fff8d1d5fe0 error 6 in php-fpm[400000+325000]
php-fpm[15225]: segfault at 7fff8d1d5fd8 ip 00000000005c02ba sp 00007fff8d1d5fe0 error 6 in php-fpm[400000+325000]
php-fpm[15227]: segfault at 7fff8d1d5fd8 ip 00000000005c02ba sp 00007fff8d1d5fe0 error 6 in php-fpm[400000+325000]
php-fpm[15362]: segfault at 7fff3118afd0 ip 00000000005c0ace sp 00007fff3118afa0 error 6 in php-fpm[400000+325000]

I analysed the core dump with gdb and this is what I see for the first two frames: http://pastebin.com/raw.php?i=aPvB1sWv (doesn't make much sense to me)...

Is it really possible to use PHP sessions for authentication with nginx fastcgi cache?

Posted: 04 Jul 2021 09:07 PM PDT

I have recently switched an OpenCart instance from Apache+mod_php to nginx+FastCGI+PHP-FPM. I have been trying to leverage caching for most pages through the fastcgi cache.

Unfortunately, many users began reporting ghost orders or taking over others' accounts (weeee!!!!). From exhaustive digging, it appears that pages were cached with Set-Cookie! So subsequent users who did not send a pre-existing session cookie were getting the cache initiator's session cookie. Bad!

According to all the documentation out there, the following settings are supposed to prevent this from happening (to the best of my understanding, at least):

fastcgi_pass_header Set-Cookie;
fastcgi_pass_header Cookie;
fastcgi_ignore_headers Cache-Control Expires Set-Cookie;

When I looked through the individual caches, I noticed several pages with set-cookie: [somerandomsessionid]. According to the nginx documentation under fastcgi_cache_valid:

If the header includes the "Set-Cookie" field, such a response will not be cached.

By including Set-Cookie with fastcgi_ignore_headers, am I telling it to cache set-cookie? In many examples, Set-Cookie is part of the arguments to fastcgi_ignore_headers. Or is it supposed to prevent Set-Cookie from being processed even though it's obviously in the cached files?

Here are the pertinent parts of my configuration:

location ~ .php$ { ...

fastcgi_next_upstream error timeout invalid_header http_500 http_503;
fastcgi_cache OPENCART;
fastcgi_cache_bypass $no_cache;
fastcgi_no_cache $no_cache;
fastcgi_cache_purge $purge_method;
fastcgi_cache_methods GET HEAD;
fastcgi_cache_valid 200 5m;
fastcgi_cache_use_stale error timeout invalid_header http_500;
fastcgi_pass_header Set-Cookie;
#fastcgi_hide_header Set-Cookie;
fastcgi_pass_header Cookie;
fastcgi_ignore_headers Cache-Control Expires Set-Cookie;

My cache bypass rules (called in /etc/conf.d)...

################## Fast CGI Cache Settings

# if we find a PHP session cookie, let's cache it's contents

map $http_cookie $php_session_cookie {
    default "";
    ~PHPSESSID=(?<sessionkey>[a-zA-Z0-9]+) $sessionkey; # PHP session cookie
}

fastcgi_cache_path /var/nginx/cache levels=1:2 keys_zone=OPENCART:5m max_size=10000m inactive=15m;
fastcgi_cache_key "$scheme$request_method$host$request_uri$is_mobile$php_session_cookie";

map $request_method $purge_method {
    PURGE   1;
    default 0;
}

################## Cache Header

add_header X-Cache $upstream_cache_status;

################## Cache Bypass Maps

# Don't cache the following URLs
map $request_uri $no_cache_uri {
    default 0;
    ~*/admin/ 1;
    ~*/dl/ 1;
}

# ~*/music/mp3_[^/]+/[0-9]+/.+$ 1;

map $query_string $no_cache_query {
    default 0;
    ~*route=module/cart$ 1;
    ~*route=account/ 1; # exclude account links
    ~*route=checkout/ 1; # exclude checkout links
    ~*route=module/founders 1;
    ~*route=module/cart 1;
    ~*route=product/product/captcha 1;
    ~*nocache=1 1; # exclude ajax blocks and provide for manual cache override
}

map $http_cookie $no_cache_cookie {
    default 0;
}

map $http_x_requested_with $no_cache_ajax {
    default 0;
    XMLHttpRequest 1; # Don't cache AJAX
}

map $sent_http_x_no_cache $no_no_cache {
    default 0;
    on 1; # Don't cache generic header when present and set to "on"
}

## Combine all results to get the cache bypass mapping.
map $no_cache_uri$no_cache_query$no_cache_cookie$no_cache_ajax$no_no_cache $no_cache {
    default 1;
    00000 0;
}

Session settings in php.ini:

session.auto_start = 1
session.cache_expire = 180
session.cache_limiter = nocache
session.cookie_lifetime = 0
session.cookie_path = /
session.cookie_secure = 1
session.gc_divisor = 1000
session.gc_maxlifetime = 3600
session.gc_probability = 0
session.hash_function = "sha256"
session.name = PHPSESSID
session.serialize_handler = php
session.use_cookies = 1
session.use_only_cookies = 1
session.use_strict_mode = 1
session.use_trans_sid = 0

OpenCart calls session_start() on every page load, so bypassing the PHP session does me no good for the most part. If there were a way to prevent Set-Cookie headers from ending up in the cache, this would probably work for me.

Can anyone point me in the right direction?
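
One reading of the directives, offered as a pointer rather than a definitive answer: fastcgi_ignore_headers Set-Cookie only stops Set-Cookie from disabling caching; it does not strip the header from what gets stored, which is why session IDs show up inside the cache files. The commented-out fastcgi_hide_header Set-Cookie line is the directive that actually removes the header from (cached) responses. A quick check against a cacheable page (hostname illustrative):

# A cache HIT that still carries Set-Cookie confirms the header was stored;
# X-Cache comes from the add_header in the config above:
curl -sI https://shop.example/some-page | grep -iE '^(x-cache|set-cookie)'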

grub on debian wheezy doesn't recognise mdraid

Posted: 04 Jul 2021 08:02 PM PDT

We're having a really strange problem when trying to boot from an mdraid on Debian Wheezy. All references online that I can find tell me that grub 1.99 (which is part of Debian Wheezy) should have no trouble booting from an mdraid volume. We keep getting a grub rescue prompt, however.

We verified that grub can work with the disks, since booting from a non-raid ext4 formatted partition works without problem. As soon as we put /boot on a RAID array we created with mdadm, grub no longer recognises it.

Although we started out with a RAID5 array with LVM on top, while testing we've moved back to simply /boot on a 4-disk RAID1 array. These are 4 TB disks, so we're using GPT. We installed grub on all disks with the following command:

grub-install --no-floppy --modules="raid mdraid09 mdraid1x" /dev/sda  

And for sdb, sdc and sdd, of course.

Grub keeps throwing us to grub rescue. An ls at this time only shows disks and gpt partitions, no md partitions. We've tried recreating the RAID1 with --metadata=0.9, but that didn't change the behaviour at all.

The hardware is a Dell PowerEdge R520 with the PERC 710i RAID controller. We've created RAID0s in the RAID controller for each disk, and this seems to work as expected.

No obvious errors are thrown at installation time; neither the OS nor grub complains.

A reinstall of the grub-pc package doesn't solve the problem either.

We have no further idea what to try and are hoping for some input!

EDIT

We have indeed installed grub to every disk. We are getting a grub prompt; it just cannot read the mdraid. If we add a 'normal' ext4 partition to the machine to boot from, it works.
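
A hedged diagnostic from the installed system: grub-probe reports what grub's own modules can detect for /boot, and the abstraction target should name an mdraid module if the array is visible to grub at all.

grub-probe --target=fs /boot            # filesystem grub detects
grub-probe --target=abstraction /boot   # should list an mdraid abstraction
grub-probe --target=drive /boot         # how grub names the underlying device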

How to see login history of an user in exchange 2010

Posted: 04 Jul 2021 03:03 PM PDT

How can I see the login history of a user in Exchange 2010? We tried Get-LogonStatistics, but it only shows the last login time; the same goes for Get-MailboxStatistics.

I see some data in the IIS logs, but I'm not sure whether the user logged in or not.

Why do files disappear when using autofs?

Posted: 04 Jul 2021 06:05 PM PDT

I have a disk mounted as /local on CentOS 5.3. Under /local/foldera/folderb/ I have lots of folders like:

drwxrwxrwx root root 4096 DEC 10 10:40 Platformxx  

Then I built another server, and on that server there are many Platform folders like the one above. I want to use autofs to automatically mount those folders onto my server through NFS. Using the mount command by hand, everything works fine. Then I chose to use autofs. In /etc/auto.master I wrote this:

/local/foldera/folderb /etc/auto.nfs  

and in /etc/auto.nfs:

Platformxx -rw,soft xx.xx.xx.xx:/xx/Platformxx  

After configuring that, I reloaded autofs (service autofs reload). When I open any folder in folderb, an error tells me the system cannot find the folder, and I cannot see anything in folderb; the command ll returns total 0. After I delete the record I added to auto.master and reboot the server, those folders come back. So what's wrong?
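
This matches how autofs behaves: it mounts its own (initially empty) autofs filesystem over the directory named in auto.master, shadowing whatever was stored there locally until the map entry is removed, which is consistent with the folders coming back afterwards. A quick way to see the shadowing while the map is active:

# While the map is active the directory is an autofs mount, and the local
# contents underneath it are hidden:
mount | grep folderb
grep autofs /proc/mounts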

Robocopy mirror to Samba keeps rewriting files

Posted: 04 Jul 2021 06:01 PM PDT

We have a Samba share to which I am copying files with robocopy /mir. The problem is that the next time I try to sync it with robocopy /mir, it copies all the files again even though they haven't changed. I can't see why robocopy should be detecting all these files as changed when they haven't been.

Any ideas?

Thanks, Nick

"MySQL Server has gone away" on simple select

Posted: 04 Jul 2021 03:03 PM PDT

I get a MySQL server has gone away error on a simple query.

mysql> select version();
ERROR 2006 (HY000): MySQL server has gone away
No connection. Trying to reconnect...
Connection id:    1

A query such as select id from users limit 1; can also result in this error. Most search results are about inserts or imports so I'm not really sure what to do.
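
For a query this small, the usual suspects are the server dying between queries (crash or OOM kill) or the connection being dropped, rather than an oversized packet; a few hedged places to look (the log path varies by distro):

sudo tail -n 50 /var/log/mysql/error.log        # restart or crash signatures
dmesg | grep -iE 'oom|killed process'           # did the kernel kill mysqld?
mysql -e "SHOW GLOBAL STATUS LIKE 'Uptime'"     # suspiciously low uptime means restarts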

Apache 2 .htaccess matching all sub directories of current directory

Posted: 04 Jul 2021 05:03 PM PDT

I want to disable the php engine in all directories below the current one.

I have tried using <Directory> and <DirectoryMatch>, but cannot find the correct regex syntax for matching just subdirectories.

Example directory structure:

files/folder1/
files/folder2/
files/folder3/folder3a

I want to match folder1/, folder2/, folder3/ and folder3a/ but not files/

Any ideas?
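
One possibility, sketched under the assumption of mod_php and a document root of /var/www (neither is stated in the question): DirectoryMatch is not allowed in .htaccess, but in the server config a regex that requires at least one path component below files/ matches every subdirectory while leaving files/ itself alone.

# The ".+" requires something after "files/", so files/ itself is excluded:
sudo tee /etc/apache2/conf.d/php-off-subdirs.conf >/dev/null <<'EOF'
<DirectoryMatch "^/var/www/files/.+">
    php_admin_flag engine off
</DirectoryMatch>
EOF
sudo apachectl configtest && sudo apachectl graceful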
