Sunday, January 16, 2022

Recent Questions - Server Fault


Zabbix proxy under error "failed to update local proxy configuration copy"

Posted: 16 Jan 2022 03:53 AM PST

This morning I installed a Zabbix proxy on a host, and I see this message in the error log /var/log/zabbix-proxy/zabbix_proxy.log while using the default Template Linux:

received configuration data from server at "<OMISSIS>", datalen 10911
failed to update local proxy configuration copy: invalid field name "interface.bulk"

Moreover, I've inspected the hosts table in the Zabbix proxy's MySQL database and it is empty, even though I'm quite sure the proxy has a lot of agents connected to it.

What's going on?

Updated:

My Zabbix server is version 4.0.4 and my Zabbix proxy is version 5.0.8.

I just transferred a domain name, and now my .htaccess file has two blocks that seem to do the same thing. Should one be deleted?

Posted: 16 Jan 2022 03:53 AM PST

RewriteEngine On
RewriteCond %{HTTP_HOST} lagiraudiere\.com [NC]
RewriteCond %{SERVER_PORT} 80
RewriteRule ^(.*)$ https://www.lagiraudiere.com/$1 [R,L]

and this one

RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]
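Both blocks redirect HTTP to HTTPS; the first also forces the www host for one specific domain, while the second is generic. If the goal is one canonical rule, a consolidated sketch (the R=301 flag and the canonical host are assumptions inferred from the two blocks) could look like:

```apache
RewriteEngine On
# Redirect anything that is not already https://www.lagiraudiere.com/... (illustrative)
RewriteCond %{HTTPS} off [OR]
RewriteCond %{HTTP_HOST} !^www\.lagiraudiere\.com$ [NC]
RewriteRule ^(.*)$ https://www.lagiraudiere.com/$1 [R=301,L]
```

With a single block like this in place, keeping both of the original blocks would be redundant.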

iptables: why are outgoing connections working even though no rules allow it

Posted: 16 Jan 2022 03:25 AM PST

The INPUT and OUTPUT chain policies are set to DROP, with very few rules allowing only specific traffic between directly cable-connected devices. However, if I temporarily add a cable that goes to the router, why can I initiate outgoing connections and receive answers, like doing apt update, even though there are no rules allowing HTTP traffic in or out?

I have noticed that if I add iptables -P FORWARD DROP then those outgoing connections don't work anymore. Why does the FORWARD chain have any impact in this?

raspberrypi:~ $ sudo iptables -nvL
Chain INPUT (policy DROP 332 packets, 244K bytes)
 pkts bytes target     prot opt in     out     source               destination
    0     0 ACCEPT     all  --  lo     *       0.0.0.0/0            0.0.0.0/0
 1254 79084 ACCEPT     tcp  --  *      *       66.66.66.5           66.66.66.3           tcp dpt:21385 ctstate NEW,ESTABLISHED
 1453 2495K ACCEPT     all  --  *      *       0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain OUTPUT (policy DROP 373 packets, 47731 bytes)
 pkts bytes target     prot opt in     out     source               destination
 1715  162K ACCEPT     tcp  --  *      *       99.99.99.3           99.99.99.2           tcp dpt:5656
    6   456 ACCEPT     udp  --  *      *       99.99.99.3           99.99.99.2           udp dpt:123
  952  156K ACCEPT     all  --  *      *       0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED

Secondary question: Is there any risk in using those easy-to-recognize IP addresses between my directly cable-connected internal devices? Could packets leak, given that those are valid public addresses?
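For context, the FORWARD chain only sees packets routed *through* the box; traffic that originates on the host itself traverses OUTPUT (with replies arriving via INPUT). A sketch of rules that would allow outbound web traffic explicitly rather than relying on broad state-match rules (the ports and the iptables-restore fragment format are illustrative):

```
*filter
# Allow new and established outbound HTTP/HTTPS from the host itself
-A OUTPUT -p tcp -m multiport --dports 80,443 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
# Accept the replies coming back in
-A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
COMMIT
```

Note that apt update would additionally need DNS (port 53, UDP and TCP) allowed out.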

Water Vapor inside the Camera Lens of the phone [closed]

Posted: 16 Jan 2022 01:08 AM PST

Does anyone have the same problem as me? There is water vapor inside my phone's camera lens. What should I do? Has anyone tried putting their phone in a bag of rice, and does that method work?

Website domain not working with www but working without www

Posted: 16 Jan 2022 12:28 AM PST

My website works without www, but it does not work with www. I added a CNAME record for www.example.com, but it still did not work. I have also duplicated the entry for the zone's root, i.e. copied the IP address and made a new RR set of type A at 'www' with that address, but it is still not working. I am using Netlify for hosting.
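For reference, a minimal zone layout for this kind of setup might look like the sketch below (the IP and TTLs are placeholders, and the CNAME target should be whatever hostname your host actually tells you to use):

```
; illustrative zone fragment: apex A record, www as an alias
example.com.      3600  IN  A      203.0.113.10
www.example.com.  3600  IN  CNAME  example.com.
```

One thing to watch: a DNS name can hold either a CNAME or an A record, not both, so adding both at 'www' may cause the zone to be rejected or to behave inconsistently.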

DNS: GoDaddy for mail and AWS for the rest

Posted: 16 Jan 2022 12:35 AM PST

I have a GoDaddy hosting account, but I would like to host some of my websites in AWS S3. My question is: how do I set DNS entries in Route 53 so that web traffic is served from S3 while email traffic still points to GoDaddy?
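A sketch of the intended split in Route 53 (the MX hostnames below are placeholders; the real values must be copied from GoDaddy's DNS panel, and the web record at the apex would be a Route 53 alias A record pointing at the S3 website endpoint, which is configured in the console or API rather than in a plain zone file):

```
; web traffic: apex alias A record -> S3 website endpoint (Route 53 alias, set separately)
; mail traffic: MX records preserved from GoDaddy (placeholder targets)
example.com.  3600  IN  MX  10 mail1.godaddy-placeholder.example.
example.com.  3600  IN  MX  20 mail2.godaddy-placeholder.example.
```

The key point is that MX and A/alias records at the same name are independent, so web and mail can be routed to different providers.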

Thank you

How to forward only UDP traffic to an OpenVPN tunX interface?

Posted: 16 Jan 2022 12:05 AM PST

I have a client with a tun0 interface and tunnel network 10.0.8.0/24 (server IP 10.0.8.1, which I can ping from the client side). I want to forward only UDP traffic to that interface. How can I do that?
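One common pattern is policy routing: mark UDP packets with iptables and send marked traffic through a dedicated routing table that defaults via the tunnel. A sketch, assuming root and that tun0 is up (the mark value and table number are arbitrary choices):

```
# Mark locally generated UDP packets
iptables -t mangle -A OUTPUT -p udp -j MARK --set-mark 0x1
# Route marked packets through table 100, which defaults via the VPN
ip rule add fwmark 0x1 table 100
ip route add default via 10.0.8.1 dev tun0 table 100
```

One caveat: the VPN's own UDP transport port should be excluded from the mark (e.g. with a `! --dport` match), or the tunnel's encapsulated traffic will try to route through itself.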

How to redirect user if direct access image files by browser? [nginx]

Posted: 16 Jan 2022 01:31 AM PST

How do I redirect only when a user directly accesses image files in the browser? I want to keep the ability to embed images with <img src="...">. How do I redirect from

https://img.example.com/c/600x1200_90_webp/img-master/img/2022/01/03/08/04/54/95259215_p0_master1200.jpg

to

https://example.com/detail?id=95259215

This is my nginx conf

location ~ "^/c/600x1200_90_webp/img-master/img/\d+/\d+/\d+/\d+/\d+/\d+/(?<filename>.+\.(jpg|png|webp))$" {
    return 301 https://example.com/detail?id=$filename;
}

The code isn't working: it redirects to https://example.com/detail?id=95259215_p0_master1200.jpg, but I need it to trim everything after the leading digits of the filename, in this case _p0_master1200.jpg. I also don't know how to detect whether the user is accessing the image directly in the browser.
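A sketch that captures only the leading digits and, as a heuristic, redirects only direct navigations (it assumes browsers that send the Sec-Fetch-Dest fetch-metadata header, which is "document" for navigation and "image" for <img> loads; the capture-group name "id" is arbitrary):

```nginx
location ~ "^/c/600x1200_90_webp/img-master/img/\d+/\d+/\d+/\d+/\d+/\d+/(?<id>\d+)_[^/]*\.(jpg|png|webp)$" {
    # Direct navigation sends "Sec-Fetch-Dest: document"; embedded images send "image"
    if ($http_sec_fetch_dest = document) {
        return 301 https://example.com/detail?id=$id;
    }
    try_files $uri =404;
}
```

Older clients that do not send fetch-metadata headers would fall through to serving the image, so this degrades safely rather than blocking embeds.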

docker health check for disk space not working as intended

Posted: 16 Jan 2022 02:42 AM PST

I've got an nginx container which ends up with a full disk after it's been running for about 10 days. So if a new version of the app isn't released, errors start to occur that look like:

2022/01/15 22:45:04 [crit] 13#13: *406812 mkdir() "/var/cache/nginx/uwsgi_temp/9/07" failed (28: No space left on device) while reading upstream...
2022/01/15 22:44:37 [crit] 13#13: *406820 pwritev() "/var/cache/nginx/client_temp/0000001078" failed (28: No space left on device)...

This happened over the Christmas break so I thought the ideal situation here is to have the container health check ensure that there is free disk space. I thought I had achieved that with this container setup (but clearly not);

FROM nginx:1.21.5-alpine-perl

RUN apk update && \
    apk add --no-cache dnsmasq supervisor curl

COPY ./config/supervisord.conf /etc/supervisor/conf.d/supervisord.conf
COPY ./config/nginx.conf /etc/nginx/nginx.conf

HEALTHCHECK --interval=15s --timeout=30s \
    CMD exit $(( $(df / | tail -n1 | awk '{print $5}' | sed 's/\%//') > 95 ? 1 : 0 )) || exit 1

How should I check for disk space in the health check?
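A simpler formulation of the same check avoids the arithmetic-ternary quoting entirely and just exits nonzero when usage crosses the threshold (a sketch; the 95% threshold and the / mount point are assumptions carried over from the Dockerfile above):

```shell
#!/bin/sh
# Report the used percentage of the filesystem holding the given path.
# df -P guarantees POSIX single-line-per-filesystem output.
usage_pct() {
    df -P "$1" | awk 'NR==2 { sub(/%/, "", $5); print $5 }'
}

# Healthy (exit 0) only while usage is below 95%.
used=$(usage_pct /)
[ "$used" -lt 95 ]
```

This could be wired in as, e.g., `HEALTHCHECK CMD /healthcheck.sh` (the script name is illustrative). Note also that a failing health check by itself only flips the container's status to unhealthy; plain `docker run` does not restart unhealthy containers, so something (an orchestrator or restart tooling) still has to act on that status.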

403 in nginx when accessing a directory

Posted: 16 Jan 2022 01:47 AM PST

I made a very simple server to test how a URL with a folder behaves in nginx. Nginx is running in Docker (the nginx:latest image) as user nginx (the default set in /etc/nginx/nginx.conf), with this config:

server {
        server_name example.com;

        location /test/ {
                root /var/www/test;
                index index.html;
        }
}

and this structure:

/var/www
└── test
    └── index.html

cat /var/www/test/index.html
Test

ls -l /var/ | grep www
drwxr-xr-x 3 root  root 4096 Jan 15 23:33 www

ls -l /var/www/test/
-rw-r--r-- 1 root root 7 Jan 15 21:08 index.html

Now I have this issue:

curl http://example.com/test/
<html>
<head><title>403 Forbidden</title></head>
<body>
<center><h1>403 Forbidden</h1></center>
<hr><center>nginx/1.21.5</center>
</body>
</html>

curl http://example.com/test
<html>
<head><title>301 Moved Permanently</title></head>
<body>
<center><h1>301 Moved Permanently</h1></center>
<hr><center>nginx/1.21.5</center>
</body>
</html>

I expect to see "Test" when I access http://example.com/test or http://example.com/test/. What am I doing wrong?
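The likely cause: nginx appends the full request URI to root, so location /test/ with root /var/www/test makes nginx look for /var/www/test/test/index.html, which doesn't exist (hence 403 when directory listing is off). Either point root one level up, or use alias, which replaces the matched prefix instead of appending to it:

```nginx
location /test/ {
    root /var/www;          # nginx resolves /test/index.html -> /var/www/test/index.html
    # or, equivalently for this layout:
    # alias /var/www/test/; # alias replaces the /test/ prefix
    index index.html;
}
```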

Unable to connect GCP SQL Instance from GKE cluster

Posted: 16 Jan 2022 01:54 AM PST

I have created a vpc-native cluster and I am trying to connect from a pod inside the cluster to a postgres SQL instance with a private IP.

I am testing using a basic telnet <private ip> 5432 command.

The connection works fine when I try it from a GCE instance in the same VPC, and all connectivity tests in GCP give me a green light, so it seems to be a k8s issue.

Here is my cluster:

gcloud container clusters create alex-test \
    --network=factory-vpc \
    --region=europe-west1 \
    --enable-ip-alias \
    --subnetwork=europe-west1-factory-subnet \
    --cluster-ipv4-cidr="/16" \
    --services-ipv4-cidr="/20"

Here is how I am testing the connectivity:

kubectl run -it --rm --restart=Never busybox --image=gcr.io/google-containers/busybox sh
telnet <private ip> 5432

Here is my network config in terraform:

resource "google_compute_network" "factory" {
  name                    = "factory-vpc"
  auto_create_subnetworks = false

  depends_on = [google_project_service.compute]
}

resource "google_compute_subnetwork" "factory_subnet" {
  name                     = "${var.region}-factory-subnet"
  ip_cidr_range            = "10.0.0.0/16"
  region                   = var.region
  network                  = google_compute_network.factory.self_link
  private_ip_google_access = true

  secondary_ip_range {
    ip_cidr_range = "10.2.0.0/16"
    range_name    = "pods"
  }

  secondary_ip_range {
    ip_cidr_range = "10.3.0.0/16"
    range_name    = "services"
  }
}

resource "google_compute_global_address" "gitlab_google_private_peering" {
  provider      = google-beta
  name          = "gitlab-gcp-private"
  address_type  = "INTERNAL"
  purpose       = "VPC_PEERING"
  network       = google_compute_network.factory.self_link
  prefix_length = 16
}

resource "google_service_networking_connection" "gitlab_google_private_peering" {
  provider                = google-beta
  network                 = google_compute_network.factory.self_link
  service                 = "servicenetworking.googleapis.com"
  reserved_peering_ranges = [google_compute_global_address.gitlab_google_private_peering.name]
}

I have already checked the following documentation and articles, but nothing helps:

Any help is greatly appreciated !

My EMR cluster is terminated with an error after its status is set to starting

Posted: 16 Jan 2022 12:05 AM PST

Hi, when I create an EMR cluster, the status says it is being created, but after 58 minutes it throws an error saying Master - 1: Error provisioning instances (screenshot of the error attached). I tried multiple times, but all attempts failed.

I was following the AWS documentation on how to create EMR cluster

https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-gs.html  

Create EMR cluster on AWS(Picture from the documentation attached)

Where did I go wrong? I want to successfully create an EMR cluster and attach a Jupyter notebook to it. Is there documentation on how to create a cluster that keeps running without being terminated after 58 minutes?
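For comparison with the console walkthrough, a minimal CLI equivalent (all values are placeholders; this assumes the default EMR IAM roles already exist and the account's default subnet can provision the chosen instance type):

```
aws emr create-cluster \
  --name "test-cluster" \
  --release-label emr-6.5.0 \
  --applications Name=Spark \
  --instance-type m5.xlarge \
  --instance-count 3 \
  --use-default-roles
```

"Error provisioning instances" typically points at the EC2 side of cluster creation, e.g. capacity or availability of the instance type in the chosen AZ, or subnet and IAM role configuration, rather than anything about the notebook setup.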

Please suggest me what has to be done.

Thankyou.

Nginx websocket reverse proxy remove location

Posted: 16 Jan 2022 01:50 AM PST

Trying to set up an nginx https reverse proxy to home assistant, this works:

...
location / {
        proxy_pass http://127.0.0.1:8123;
        proxy_redirect http:// https://;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
}

But I would like to access it from a non-root url like home.local/ha. Similar to the goals of this question's asker. I have tried all of the answers on that thread, but none seem to allow the websockets to connect (I think). An example of something I tried:

...
location = /ha {
        return 302 /ha/;
}

location /ha/ {
        proxy_pass http://127.0.0.1:8123/;
        #proxy_redirect http:// https://;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
}

Any Ideas how I can get this to work?

Local devices aren't reachable via VPN

Posted: 16 Jan 2022 12:56 AM PST

I have a VPN configured on a router (router model is bintec be.ip plus).

VPN Connections are successfully established by the clients using IKEv2 (router is reachable via DynDNS).

Router's local ip address is 192.168.73.1. One of the local device's ip address is 192.168.73.150.

The problem is: sometimes the devices in the local network cannot be reached by the VPN clients. E.g. a ping fails:

> ping 192.168.73.150
PING 192.168.73.150 (192.168.73.150): 56 data bytes
Request timeout for icmp_seq 0

The router itself is always reachable by the clients:

> ping 192.168.73.1
PING 192.168.73.1 (192.168.73.1): 56 data bytes
64 bytes from 192.168.73.1: icmp_seq=0 ttl=63 time=83.713 ms

And the local device is always reachable by the router:

> ping 192.168.73.150
PING 192.168.73.150: 64 data bytes
64 bytes from 192.168.73.150: icmp_seq=0. time=0.569 ms

As it only sometimes fails I doubt it's a firewall issue.

As the ping packets from the router to the device succeed, I doubt it's a local network issue.

I suspect some kind of routing issue but have absolutely no idea how to proceed with the problem.

Any ideas how to investigate further?
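Some illustrative next steps, run from a VPN client while the failure is actually occurring:

```
ping 192.168.73.1            # is the router side of the tunnel still up?
traceroute 192.168.73.150    # do packets at least reach the router before dying?
```

It is also worth checking the target device's own default gateway: if 192.168.73.150 does not send its replies back via 192.168.73.1, pings from the VPN subnet fail on the return path even though pings from the router (same subnet, no routing needed) succeed.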

mongodb 3.2 cluster database compatibility to mongodb 5.0

Posted: 16 Jan 2022 01:59 AM PST

We have a mongodb 3.2 production cluster that we need to upgrade to mongodb 5.0.

Instead of upgrading in place, we are considering creating a new MongoDB 5.0 cluster, exporting the DB from 3.2, and importing it into the 5.0 cluster.

Will there be any issues with upgrading this way? What we are uncertain about is whether the database format has changed from version 3.2 to 5.0, and whether the format conversion only happens during the stepwise upgrades from 3.2 -> 3.4 -> 3.6 -> ...
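The dump-and-restore path itself is straightforward; a sketch (the host names are placeholders):

```
# Dump from the old 3.2 cluster ...
mongodump --host mongo32-host:27017 --out /backup/dump
# ... then restore into the new 5.0 cluster
mongorestore --host mongo50-host:27017 /backup/dump
```

Because mongorestore re-inserts BSON documents through the new server, the 3.2 cluster's on-disk format does not carry over; the in-place format conversions matter only for the stepwise upgrade route. Index options or validation semantics that changed between versions may still need a review after the restore.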

Thanks in advance.

Group Policy Management about:security_mmc

Posted: 16 Jan 2022 01:40 AM PST

In Group Policy Management, when I click on an existing GPO, I get an Internet Explorer Enhanced Security Configuration message saying that "about:security_mmc.exe" is not a trusted site.

Error received when selecting a GPO

It happens every time I click a different GPO. I read that I should add this to the trusted sites list, which I did, and I confirmed it is in the policy when I run gpresult.

GPResult

But I'm still getting this message. Anything else I need to do so this doesn't keep popping up?

Problems with setting up bonding on Netplan (Ubuntu server 18.04)

Posted: 16 Jan 2022 03:07 AM PST

I have a dual-port network card and want to bond both ports, balancing traffic between them, with one static IP address. I used Ubuntu 16.04 before and this worked fine. I'm now trying to set up the same thing in netplan and am struggling. My config is below:

network:
version: 2
renderer: networkd
ethernets:
  enp1s0f0:
    dhcp4: false
    dhcp6: false
  enp1s0f1:
    dhcp4: false
    dhcp6: false
 bonds:
   bond0:
    dhcp4: false
    dhcp6: false
   interfaces:
      - enp1s0f0
     - enp1s0f1
   addresses: [192.168.3.250/24]
   gateway4: 192.168.3.1
   parameters:
     mode: 802.3ad
   nameservers:
     addresses: [8.8.8.8,8.8.4.4]
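If the indentation above reflects the actual file, netplan will reject it: every key must be nested under network:, and interfaces, addresses, gateway4, parameters and nameservers must all sit under bond0:. A corrected sketch with the same values carried over:

```yaml
network:
  version: 2
  renderer: networkd
  ethernets:
    enp1s0f0:
      dhcp4: false
      dhcp6: false
    enp1s0f1:
      dhcp4: false
      dhcp6: false
  bonds:
    bond0:
      dhcp4: false
      dhcp6: false
      interfaces:
        - enp1s0f0
        - enp1s0f1
      addresses: [192.168.3.250/24]
      gateway4: 192.168.3.1
      parameters:
        mode: 802.3ad
      nameservers:
        addresses: [8.8.8.8, 8.8.4.4]
```

Running `sudo netplan try` applies the config with an automatic rollback, which is a safe way to test a bonding change over a remote session.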

Percona-Server-shared my.cnf file conflicts with file from package mysql-community-server-5.5

Posted: 16 Jan 2022 01:01 AM PST

I'm trying to install Percona Toolkit on a server where mysql-community-server 5.5.52 is already running. I need the pt-table-checksum and pt-table-sync utilities, so I ran:

yum install percona-toolkit

I got the following error:

Transaction Check Error: file /etc/my.cnf from install of Percona-Server-shared-51-5.1.73-rel14.12.625.rhel6.x86_64 conflicts with file from package mysql-community-server-5.5.54-2.el6.x86_64

My setup is:

  • OS: CentOS release 6.8
  • Mysql: mysql-community-server-5.5.54-2.el6.x86_64
  • Percona Toolkit: 3.0.2

Domain admin can't edit GPOs on DC

Posted: 16 Jan 2022 02:04 AM PST

I have come across a weird situation.

We have 3 domain controllers, 2 running Server 2008 R2 and 1 running Server 2008, in our single-domain environment. When I log in to one of the DCs, let's say DC1, with my domain admin account and open the Group Policy Management Console (GPMC), I can't edit any GPOs, and I see "inaccessible" next to a few GPOs applied to the domain. However, with the same domain admin account on another DC, I can see all the GPOs applied to the domain and edit all of them.

I have also noticed that on the problematic DC, DC1, two GPOs are missing entirely from the Group Policy Objects node in GPMC, whereas I can see them on the other two DCs.

I have done a lot of research on this, but so far no luck!

Please help!

Painfully slow network when routed through Windows 2012 R2

Posted: 16 Jan 2022 01:01 AM PST

I have a Windows 2012 R2 server with two network adapters: an on-board 1G one for the LAN and a 100M D-Link 530T connected to the internet. Internet Connection Sharing is set up on the latter. Client machines (Win7, WinXP) on the LAN can access the internet, but speedtest behavior is peculiar. If I choose a close speedtest server with a small ping (1-10 ms), I get almost full downlink utilization on both clients and server, but if I choose a faraway speedtest server (100 ms), the server gets 50-70 Mbps of download speed while clients hardly get 1 Mbps for TCP traffic (UDP seems unaffected). Upload speed is the same, around 30 Mbps, on client and server. Every time I reboot the server, clients get the full 50-70 Mbps for about 2-3 minutes and then slow down to a crawl; occasionally this happens without a reboot, too, for no visible reason. I don't see abnormal CPU utilization on the server while speedtest is running.

Wireshark captures show a lot of dup acks and retransmissions. I captured on both server interfaces: the TCP packets that the dup acks are re-requesting are present in the capture, received on the outward-facing interface and forwarded to the LAN, and the packets that go missing belong to tight groups of 2-3 packets with very close (<10 μsec) timestamps. I've googled and tried everything that seemed remotely related, to no effect.

tcptrace graph

Copying files from server to client over SMB, I get full 1Gbps. If I connect a Win7 client straight to the internet, I don't observe any slowdown. An older server on which I had WinXP and the same outward-facing D-Link network adapter, using the same wires, also didn't show such behavior, so network adapters, wires etc. aren't likely to be the problem. Please help, I don't want to install XP on my server again!

Here's some things I have tried without success:

  1. installing latest drivers;
  2. disabling/enabling interrupt moderation (server and client);
  3. disabling/enabling offloading (server/client);
  4. increasing receive and transmit buffers;
  5. enabling ECN and CTCP on the client;
  6. looking at delayed start services on the server and disabling them;
  7. turning off ICS and switching to RRAS for NAT and routing.

Domain shared folder with user restricted subfolders

Posted: 15 Jan 2022 11:02 PM PST

I have a domain running on a virtual Windows Server 2012 R2 machine; another virtual server hosts our file server. I need a shared folder accessible by all domain users, and that part is no problem. However, I would now like to restrict access to the subfolders and, if possible, not list folders a user has no access to. Inside their subfolders, users are allowed to do anything they like.

Let me illustrate this: We have domain users Alice and Bob, shared folder Z: with subfolders K, L, M.

Alice has access to K and L.

Bob has access to L and M.

Both should be able to open Z. Alice sees folders K and L, whereas Bob sees folders L and M.

If Alice creates something in L, Bob can remove or modify it.

I have been messing around with share access, permissions and access-based enumeration, but so far no combination has got me close to what I need. Any suggestions are welcome. Thanks!

My biggest problem is probably:

How do you give everybody access to the shared folder, but at the same time restrict basically all permissions in that folder itself (except for viewing the subfolders they should have access to)?
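One way to express this with NTFS permissions, combined with enabling Access-Based Enumeration on the share (so users only see folders they can read), is sketched below in batch form; the drive letter and group names are placeholders:

```
rem Share root: all domain users may list it, but nobody may create files there
icacls D:\Share /inheritance:d
icacls D:\Share /grant "Domain Users:(RX)"

rem Subfolder K: drop the broad grant, give Alice's group full modify rights
icacls D:\Share\K /inheritance:d
icacls D:\Share\K /remove "Domain Users"
icacls D:\Share\K /grant "GRP-K:(OI)(CI)M"
```

Access-Based Enumeration is toggled per share (e.g. via Server Manager's share properties on 2012 R2); with it enabled, Bob simply does not see folder K in Z.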

Trying to ChrootDirectory an SFTP user to their home directory

Posted: 15 Jan 2022 11:02 PM PST

I have followed a few examples of how to do this, all of them end up with modifying sshd_config to

Subsystem sftp internal-sftp

Match User chubbyninja
    ChrootDirectory %h
    AllowTCPForwarding no
    X11Forwarding no
    ForceCommand /usr/lib/openssh/sftp-server

When I do this, I run sshd -t to make sure there are no errors, then service sshd restart.

Once it's restarted, I try to SFTP (with FileZilla) but I keep getting:

Response:   fzSftp started
Command:    open "chubbyninja@xxx.xxx.xxx.xxx" 22
Command:    Pass: ********************
Error:  Network error: Software caused connection abort
Error:  Could not connect to server

If I revert the config back to its original state, I can SFTP fine, but then I can browse any directory, whereas I need users confined to their home directory.

My default config has this line in it:

Subsystem sftp /usr/lib/openssh/sftp-server  

This is the line I'm replacing with the above details.

I only have access to this machine over ssh, although i do have root access.

UPDATE: After following sam_pan_mariusz's advice it appears to get further, but now I get:

Response:   fzSftp started
Command:    open "chubbyninja@xxx.xxx.xxx.xxx" 22
Error:  Network error: Connection refused
Error:  Could not connect to server

UPDATE 2

I have also followed Froggiz's advice and changed my config to this:

Subsystem sftp internal-sftp -u 0007 -f AUTH -l VERBOSE

Match Group chubbyninja
     ChrootDirectory /home/chubbyninja
     ForceCommand internal-sftp -u 0007
     AllowTcpForwarding no
     GatewayPorts no
     X11Forwarding no

but I get the original Software caused connection abort error.

I monitor /var/syslog but nothing shows up to indicate why there's this error
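For what it's worth, the abort-on-connect symptom is what sshd produces when the ChrootDirectory ownership rules are violated: every component of the chroot path must be owned by root and must not be writable by group or others. A sketch of the usual fix (paths are from the question; the files subdirectory name is illustrative):

```
chown root:root /home/chubbyninja
chmod 755 /home/chubbyninja
# Give the user somewhere writable inside the chroot
mkdir -p /home/chubbyninja/files
chown chubbyninja:chubbyninja /home/chubbyninja/files
```

sshd reports the exact reason ("bad ownership or modes for chroot directory ...") to the auth log, typically /var/log/auth.log on Ubuntu, which may explain why nothing shows up in syslog.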

UPDATE 3 - Added sshd_config

# Package generated configuration file
# See the sshd_config(5) manpage for details

# What ports, IPs and protocols we listen for
Port 22
# Use these options to restrict which interfaces/protocols sshd will bind to
#ListenAddress ::
#ListenAddress 0.0.0.0
Protocol 2
# HostKeys for protocol version 2
HostKey /etc/ssh/ssh_host_rsa_key
HostKey /etc/ssh/ssh_host_dsa_key
HostKey /etc/ssh/ssh_host_ecdsa_key
HostKey /etc/ssh/ssh_host_ed25519_key
#Privilege Separation is turned on for security
UsePrivilegeSeparation yes

# Lifetime and size of ephemeral version 1 server key
KeyRegenerationInterval 3600
ServerKeyBits 1024

# Logging
SyslogFacility AUTH
LogLevel INFO

# Authentication:
LoginGraceTime 120
#PermitRootLogin without-password
PermitRootLogin yes
StrictModes yes

RSAAuthentication yes
PubkeyAuthentication yes
#AuthorizedKeysFile %h/.ssh/authorized_keys

# Don't read the user's ~/.rhosts and ~/.shosts files
IgnoreRhosts yes
# For this to work you will also need host keys in /etc/ssh_known_hosts
RhostsRSAAuthentication no
# similar for protocol version 2
HostbasedAuthentication no
# Uncomment if you don't trust ~/.ssh/known_hosts for RhostsRSAAuthentication
#IgnoreUserKnownHosts yes

# To enable empty passwords, change to yes (NOT RECOMMENDED)
PermitEmptyPasswords no

# Change to yes to enable challenge-response passwords (beware issues with
# some PAM modules and threads)
ChallengeResponseAuthentication no

# Change to no to disable tunnelled clear text passwords
PasswordAuthentication yes

# Kerberos options
#KerberosAuthentication no
#KerberosGetAFSToken no
#KerberosOrLocalPasswd yes
#KerberosTicketCleanup yes

# GSSAPI options
#GSSAPIAuthentication no
#GSSAPICleanupCredentials yes

X11Forwarding yes
X11DisplayOffset 10
PrintMotd no
PrintLastLog yes
TCPKeepAlive yes
#UseLogin no

#MaxStartups 10:30:60
#Banner /etc/issue.net

# Allow client to pass locale environment variables
AcceptEnv LANG LC_*

Subsystem sftp internal-sftp -u 0007 -f AUTH -l VERBOSE

# Set this to 'yes' to enable PAM authentication, account processing,
# and session processing. If this is enabled, PAM authentication will
# be allowed through the ChallengeResponseAuthentication and
# PasswordAuthentication.  Depending on your PAM configuration,
# PAM authentication via ChallengeResponseAuthentication may bypass
# the setting of "PermitRootLogin without-password".
# If you just want the PAM account and session checks to run without
# PAM authentication, then enable this but set PasswordAuthentication
# and ChallengeResponseAuthentication to 'no'.
UsePAM yes

Match Group chubbyninja
        ChrootDirectory /home/chubbyninja
        AllowTCPForwarding no
        X11Forwarding no
        GatewayPorts no
        ForceCommand internal-sftp -u 0007

Use Openswan / IPSec on Ubuntu server to connect to existing Openswan VPN - NAT broken

Posted: 16 Jan 2022 12:05 AM PST

I have an existing Openswan VPN, all working fairly well with Windows, Mac and Phones.

[Office 192.168.0.0/24]---[VPN A.B.C.D]----[Internet]---[Home Routers (NAT), Dynamic IPs]----[Workstations]

Now I want to run an offsite backup server and connect it to the same VPN, still with a dynamic IP

[Office 192.168.0.0/24]---[VPN A.B.C.D]----[Internet]---[Home Router (NAT), Dynamic IP]----[zfsbackup]

IPSec seems to negotiate OK, this is the output on the backup side:

104 "net2net" #1: STATE_MAIN_I1: initiate
003 "net2net" #1: received Vendor ID payload [Openswan (this version) 2.6.38 ]
003 "net2net" #1: received Vendor ID payload [Dead Peer Detection]
003 "net2net" #1: received Vendor ID payload [RFC 3947] method set to=115
106 "net2net" #1: STATE_MAIN_I2: sent MI2, expecting MR2
003 "net2net" #1: NAT-Traversal: Result using draft-ietf-ipsec-nat-t-ike (MacOS X): i am NATed
108 "net2net" #1: STATE_MAIN_I3: sent MI3, expecting MR3
003 "net2net" #1: received Vendor ID payload [CAN-IKEv2]
004 "net2net" #1: STATE_MAIN_I4: ISAKMP SA established {auth=OAKLEY_PRESHARED_KEY cipher=aes_256 prf=oakley_sha group=modp1024}
117 "net2net" #2: STATE_QUICK_I1: initiate
004 "net2net" #2: STATE_QUICK_I2: sent QI2, IPsec SA established transport mode {ESP=>0x9fecb28d <0x11f39e30 xfrm=AES_128-HMAC_SHA1 NATOA=none NATD=none DPD=enabled}

However when I try to start the connection in xl2tpd this happens:

xl2tpd[14438]: Listening on IP address 0.0.0.0, port 1701
xl2tpd[14438]: get_call: allocating new tunnel for host A.B.C.D, port 1701.
xl2tpd[14438]: Connecting to host A.B.C.D, port 1701
xl2tpd[14438]: control_finish: message type is (null)(0).  Tunnel is 0, call is 0.
xl2tpd[14438]: control_finish: sending SCCRQ
xl2tpd[14438]: network_thread: select timeout
...

Running packet capture on the machine at this stage shows that nothing is sent over the wire.

Something that concerns me is the log output when connecting. This is the Office side log for a windows connection:

"L2TP"[6] W.X.Y.Z #115: STATE_QUICK_R2: IPsec SA established transport mode {ESP=>0x917a3b5e <0xbc445896 xfrm=AES_128-HMAC_SHA1 NATOA=192.168.1.106 NATD=W.X.Y.Z:4500 DPD=none}

This is for the backup server (Same network as windows pc above), note the port 1024

"L2TP"[11] W.X.Y.Z #23: STATE_QUICK_R2: IPsec SA established transport mode {ESP=>0x11f39e30 <0x9fecb28d xfrm=AES_128-HMAC_SHA1 NATOA=192.168.1.2 NATD=W.X.Y.Z:1024 DPD=enabled}

ipsec verify shows that pluto is listening on 500 and 4500. I can't find a way to get IPSec to pass through the correct port number. My programmer's guess is that the port isn't being read in from anywhere and it's defaulting to the first assignable userland port.

Note that I can reserve 1024 in advance using nc. Office VPN still gets told to use 1024 rather than say 1025. Pluto doesn't complain that 1024 is already in use.

nc -u -p 1024 localhost 22  

I really hope someone can help - Have spent a few hours on this both searching and tweaking.

Office config is as follows:

config setup
        nat_traversal=yes
        virtual_private=%v4:10.0.0.0/8,%v4:192.168.0.0/16,%v4:172.16.0.0/12
        oe=off
        protostack=netkey
        plutostderrlog=/var/log/pluto.log
        nhelpers=0

conn L2TP
        authby=secret
        auto=add
        pfs=no
        type=transport
        rekey=yes
        compress=no
        left=203.39.25.66
        leftnexthop=%defaultroute
        leftprotoport=17/1701
        right=%any
        rightsubnet=vhost:%no,%priv
        rightprotoport=17/%any
        forceencaps=no
        dpddelay=40
        dpdtimeout=130
        dpdaction=clear

Backup server config is:

version 2.0

config setup
        dumpdir=/var/run/pluto/
        nat_traversal=yes
        virtual_private=%v4:10.0.0.0/8,%v4:192.168.0.0/16,%v4:172.16.0.0/12,%v4:25.0.0.0/8,%v6:fd00::/8,%v6:fe80::/10
        oe=off
        protostack=netkey
        plutostderrlog=/var/log/pluto.log

conn net2net
        authby=secret
        pfs=no
        auto=add
        ike=aes256-sha1;modp1024
        compress=no
        keyingtries=3
        dpddelay=40
        dpdtimeout=130
        dpdaction=clear
        rekey=yes
        type=transport
        left=%defaultroute
        leftsubnet=192.168.1.0/24
        leftnexthop=%defaultroute
        leftprotoport=17/1701
        right=A.B.C.D
        rightprotoport=17/1701

PHP-FPM with Nginx not Working on port 80

Posted: 15 Jan 2022 10:02 PM PST

I am trying out nginx on my Ubuntu 14.04 desktop. I followed some basic setup articles and installed the latest nginx and PHP-FPM from the apt repository. nginx works and serves my HTML pages, but when I request a .php page the browser downloads the .php file instead of rendering the page with the PHP output. I am using the following server definition in /etc/nginx/sites-available/default:

server {
    listen 80 default_server;
    #listen [::]:80 default_server ipv6only=on;

    root /home/munjal/public_html;
    index index.html index.htm;

    # Make site accessible from http://localhost/
    server_name localhost;

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        # try_files $uri $uri/ =404;
        # Uncomment to enable naxsi on this location
        # include /etc/nginx/naxsi.rules
    }

    # Only for nginx-naxsi used with nginx-naxsi-ui : process denied requests
    #location /RequestDenied {
    #   proxy_pass http://127.0.0.1:8080;
    #}

    #error_page 404 /404.html;

    # redirect server error pages to the static page /50x.html
    #
    #error_page 500 502 503 504 /50x.html;
    #location = /50x.html {
    #   root /usr/share/nginx/html;
    #}

    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
    #   # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
    #
    #   # With php5-cgi alone:
    #   fastcgi_pass 127.0.0.1:9000;
    #   # With php5-fpm:
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        include fastcgi_params;
    }

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    #   deny all;
    #}

}

However, if I change my server definition to listen on port 8080 instead of 80, the request is passed to PHP-FPM and the page is rendered with the PHP output:

listen 8080 default_server;  
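Since the same config works on 8080, it is worth confirming that the nginx you edited is actually the process answering on port 80; another server (such as an Apache install that ships with its own PHP handling disabled) may be bound there. A quick illustrative check, followed by a config validation and reload:

```
# Which process is listening on port 80?
sudo lsof -i :80 -sTCP:LISTEN
# Validate and reload the nginx configuration
sudo nginx -t && sudo service nginx reload
```

If lsof shows a process other than nginx on port 80, that process is what serves (and fails to execute) the .php files.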

S/MIME icon missing from OWA

Posted: 16 Jan 2022 02:04 AM PST

I'm trying to test S/MIME with OWA (Exchange 2010, Outlook 2010). My research tells me the control must be installed first: as someone with admin rights, open OWA, click All Options, then Settings, then the S/MIME icon, and install the control. I also know it has to be done in 32-bit IE. For myself and another freshly created user it works fine; the icon is there. But when I tested with a third user, there is no S/MIME icon. It's missing.

The fact that it's there for two accounts says it's enabled in the Outlook Web App mailbox policies. I even installed it on my own account and it works. There is only the default policy, so the user can't be assigned to a policy where it's disabled; he's assigned to the same policy as me in any case.

So why do the other accounts have the icon, but the one account doesn't? Without the icon, I can't install the control.

Windows Server 2008 ARP Cache Poisioning

Posted: 15 Jan 2022 11:06 PM PST

Recently ran into a very strange problem.

Several applications were having issues communicating through our F5 Load-Balancer. When we looked into it, we found that the router had incorrect ARP and MAC address table entries on the Load-Balancer VLAN. Those entries were pointing towards a Windows Server 2008 R2 box instead of the Load-Balancer's external interface.

Now here is the strange thing. The hardware address in the MAC/ARP table entries did not exist on the Windows 2008 server, but it was very close. The Windows server was on router interface Gi1/37 (below). The Load-Balancer's external address was 192.168.111.61 and the Windows server was 192.168.111.125: two totally different IP addresses in the same /24 subnet.

IPConfig on Windows Server

Ethernet adapter Local Area Connection:

   Connection-specific DNS Suffix  . :
   Description . . . . . . . . . . . : Intel(R) 82574L Gigabit Network Connect
   Physical Address. . . . . . . . . : 00-E0-81-DF-15-FE
   DHCP Enabled. . . . . . . . . . . : No
   Autoconfiguration Enabled . . . . : Yes
   Link-local IPv6 Address . . . . . : fe80::917f:6781:df6:f724%11(Preferred)
   IPv4 Address. . . . . . . . . . . : 192.168.111.125(Preferred)
   Subnet Mask . . . . . . . . . . . : 255.255.255.0
   Default Gateway . . . . . . . . . : fe80::21e:f7ff:fe41:2a80%11
                                       fe80::21e:f7ff:fe41:3540%11
                                       192.168.111.1

MAC Info on Windows Box

C:\Users\Administrator>getmac

Physical Address    Transport Name
=================== =========================================================
00-E0-81-DF-15-FE   \Device\Tcpip_{5BB4FA88-7056-4303-8528-AA2293E4821B}
00-E0-81-DF-15-FD   Media disconnected

The ARP and MAC ADDRESS entry in the Router

Router#sh ip arp 192.168.111.61
Protocol  Address          Age (min)  Hardware Addr   Type   Interface
Internet  192.168.111.61            1   00e0.81df.15fc  ARPA   Vlan50

Router#sh mac-address-table addr 00e0.81df.15fc

Legend: * - primary entry
    age - seconds since last seen
    n/a - not available

  vlan   mac address     type    learn     age              ports
------+----------------+--------+-----+----------+--------------------------
Module 1[FE 1]:
*   50  00e0.81df.15fc   dynamic  Yes        275   Gi1/37

Although similar, the learned hardware address did not match either physical address that actually exists on the Windows 2008 server; it differed only in the last few bits. Logic dictates that the Windows server must have performed some sort of incorrect gratuitous ARP to poison the router's ARP and MAC tables, or that it was responding to ARP requests for an IP it didn't own with a MAC address it didn't own.

The second we shut down the Windows 2008 interface and cleared the ARP/MAC tables, the problem was solved.

For the life of me, I am unable to understand how this happened (or why).
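For what it's worth, the router's learned MAC can be checked against the host's real NICs mechanically; the values below are taken from the question, and the tcpdump line at the end (commented out, with an assumed interface name) is how one would catch the host actually emitting the bogus ARP.

```shell
# The router learned 00e0.81df.15fc for 192.168.111.61; the Windows box owns
# ...15FE and ...15FD. A near-miss like this can also come from a flaky NIC
# or a teaming/virtual-MAC driver, not only from deliberate poisoning.
learned="00-E0-81-DF-15-FC"
owned="00-E0-81-DF-15-FE 00-E0-81-DF-15-FD"
case " $owned " in
  *" $learned "*) echo "learned entry matches a real NIC" ;;
  *)              echo "learned entry matches no real NIC" ;;
esac
# To catch the culprit in the act, capture on a SPAN port or the router
# segment (interface name is an assumption):
#   tcpdump -eni eth0 arp and host 192.168.111.61
```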

Wake on Lan (WOL) stopped working

Posted: 15 Jan 2022 10:59 PM PST

My wake on LAN stopped working for no apparent reason.

I installed Wireshark and sent a magic packet from another machine; I could see the packet arriving at the target computer. Nothing in the BIOS has changed, so I don't know where else to look.
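Two things worth checking when WOL silently stops: the NIC's Wake-on setting (driver and power-management updates can revert it even when the BIOS is untouched), and that the magic packet itself is well-formed. A minimal sketch of the packet layout, with a hypothetical target MAC:

```shell
# A WOL magic packet is 6 bytes of 0xFF followed by the target MAC repeated
# 16 times: 6 + 16*6 = 102 bytes, i.e. 204 hex characters.
mac="00:e0:81:df:15:fe"          # hypothetical target MAC
hex=$(echo "$mac" | tr -d ':')
payload="ffffffffffff"
for i in $(seq 16); do payload="$payload$hex"; done
echo "${#payload}"               # prints 204
# On the target (Linux), confirm the NIC still allows magic-packet wake:
#   ethtool eth0 | grep Wake-on   # "g" means magic packet is enabled
# and re-enable it if it has reverted:
#   ethtool -s eth0 wol g
```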

Cannot Access new Sharepoint 2013 Collection

Posted: 16 Jan 2022 12:05 AM PST

I just finished installing SharePoint 2013 on our dev machine. I'm in the CA, and I can create new site collections just fine. The problem is that I cannot access them from any account, including the designated collection admin account. I've been going around and around on this, but nothing works; I just get "Sorry, this site hasn't been shared with you".

Anyone know what is causing this? Logging into the CA works fine under any allowable account, and the security settings match for both IIS sites.

Accessing the site collection security page directly works for some reason (.../_layouts/15/settings.aspx), and when I view the site administrators page my account is even listed! Still no luck accessing the actual SP collection, though.

Nginx dynamic upstream configuration / routing

Posted: 16 Jan 2022 03:07 AM PST

I was experimenting with dynamic upstream configuration for nginx and can't find any good solution for driving the upstream configuration from a third-party source like Redis or MySQL.

The idea is to have a single-file configuration on the primary server and proxy requests to various app servers based on environment conditions. Think of dynamic deployments where you have X servers running Y workers on different ports. For instance, I create a new app and deploy it; the app manager selects a server, rolls out a worker (Ruby/PHP/Python), and reports the ip:port to the central database with status "up". From then on, when I go to the given URL, nginx should proxy all requests to the specified ip:port upstream. The whole thing is pretty similar to what Heroku does, except this proof of concept is not meant to be production-ready; it's mostly for internal needs.

The easiest solution I found was using a resolver with a Ruby-based DNS server. It works, and nginx gets the IP address correctly, but the problem is that you can't define a port number for that IP.

A second solution (which I haven't tried yet) is to roll something else as the proxy server, maybe written in Erlang. In that case we would need something else to serve static content.

Any ideas how to implement this in a more flexible and stable way?
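One low-tech pattern that stops short of a custom proxy: have the app manager (or a small daemon/cron job) render the worker list from the database into an include file and reload nginx. A sketch with the worker list inlined (in reality it would come from the Redis/MySQL query); all paths and names are hypothetical:

```shell
# Render an upstream block from a list of ip:port workers, then reload nginx.
workers="10.0.0.11:9001
10.0.0.12:9002"
out=/tmp/app_upstream.conf        # would live under /etc/nginx/ in practice
{
  echo "upstream app_backend {"
  printf '%s\n' "$workers" | while read -r addr; do
    echo "    server $addr;"
  done
  echo "}"
} > "$out"
cat "$out"
# Then, on the nginx host (assuming nginx.conf include's this file):
#   nginx -t && nginx -s reload
```

Nginx only re-reads its configuration on reload, so the reload step is what makes a new ip:port live, and gating it behind `nginx -t` keeps a bad render from taking the proxy down. Unlike the DNS-resolver approach, the rendered `server` directives carry the port number.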

P.S. Some research options:
