Tuesday, March 8, 2022

Recent Questions - Server Fault


PEERING connectivity issues between VPCs?

Posted: 08 Mar 2022 06:29 AM PST

Could you help me with the network infrastructure below, with reference to "PEERING"?

We have two VPCs in our GCP Cloud:

The first: "vpc-shared-nonprod". Project name: "Shared". Subnet name: subnet-shared-nonprod, "10.1.0.0/24".

The second: "vpc-4i-shared-prod". Project name: "Shared". Subnet name: subnet-shared-prod, "10.2.0.0/24".

We are not able to create peering between the projects: "vpc-shared-nonprod" ("Shared" - 10.1.0.0/24) and "vpc-shared-prod" ("Shared" - 10.2.0.0/24).
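For reference, a minimal sketch of how the two-sided peering could be created with gcloud. The project ID "shared" is an assumption based on the project name above, and peering only becomes ACTIVE after it has been created from both networks:

    # Assumed project ID "shared"; the subnet CIDRs must not overlap
    # (here 10.1.0.0/24 vs 10.2.0.0/24, so they don't).
    gcloud compute networks peerings create nonprod-to-prod \
        --project=shared \
        --network=vpc-shared-nonprod \
        --peer-project=shared \
        --peer-network=vpc-4i-shared-prod

    gcloud compute networks peerings create prod-to-nonprod \
        --project=shared \
        --network=vpc-4i-shared-prod \
        --peer-project=shared \
        --peer-network=vpc-shared-nonprod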

Ubuntu 20.04: wrong subnet per interface with isc-dhcp-server

Posted: 08 Mar 2022 05:08 AM PST

This morning I configured my interfaces:

    network:
      version: 2
      renderer: networkd
      ethernets:
        ens160:
          dhcpd4: true
          nameservers:
            addresses: [8.8.8.8,8.8.4.4]
        ens192:
          dhcpd4: false
          addresses: [192.168.0.1/24]
        ens224:
          dhcpd4: false
          addresses: [192.168.10.1/24]

And I tried to configure my DHCP server:

    subnet 192.168.0.0 netmask 255.255.255.0 {
        interface ens192;
        option domain-name "Tor.org";
        range 192.168.0.2 192.168.0.252;
        option routers 192.168.0.1;
        option domain-name-servers 192.168.0.1;
        option subnet-mask 255.255.255.0;
    }

    subnet 192.168.10.0 netmask 255.255.255.0 {
        interface ens224;
        option domain-name "Network.org";
        range 192.168.10.2 192.168.10.252;
        option routers 192.168.10.1;
        option domain-name-servers 192.168.10.1;
        option subnet-mask 255.255.255.0;
    }

In /etc/default/isc-dhcp-server I added this: INTERFACESv4="ens192 ens224"

However, every lease returned by the DHCP server is from 192.168.0.0/24, and I have no idea why, so I checked the dhcpd manual:

""" Please note that the current implementation assumes clients only have a single network interface. A client with two network interfaces will see unpredictable behavior. This is considered a bug, and will be fixed in a later release. It may be helpful to enable the one-lease-per-client parameter so that roaming clients do not trigger this same behavior. """

Do you have any idea?
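One way to narrow this down (a hedged debugging sketch, not a confirmed fix): run dhcpd in the foreground against a single interface and watch which subnet declaration it answers from:

    # Stop the service, then run the daemon in debug mode on one
    # interface at a time and watch the DISCOVER/OFFER exchange.
    sudo systemctl stop isc-dhcp-server
    sudo dhcpd -d -cf /etc/dhcp/dhcpd.conf ens224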

mount error(13): Permission denied on GNU/Linux

Posted: 08 Mar 2022 04:59 AM PST

I need help with mounting a Windows shared drive on Linux. I tried searching for a solution on Google and on this site, but was unable to find one. I verified that the user has access to the shared drive.

NOTE: All the commands are being run as root.

sudo mount.cifs //domain.local/IT /mnt/share/ -o user=domain/username

Here is the error message I'm getting:

    mount error(13): Permission denied
    Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)
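A few variations often worth trying with mount.cifs (values below reuse the placeholders from the question; this is a sketch, not a confirmed fix): pass the domain as its own option and pin the SMB protocol version and security mode:

    sudo mount.cifs //domain.local/IT /mnt/share/ \
        -o username=username,domain=domain,vers=2.1,sec=ntlmssp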

SQS is not working for multiple ECS (fargate) instances

Posted: 08 Mar 2022 04:58 AM PST

I am using an application load balancer with a target group under it. In this target group, two Fargate ECS instances are running, both using the same PHP Docker image. When I upload a CSV file, the tasks in the CSV file should be moved to SQS. However, the tasks are not passed to SQS, and no error messages are shown. When I changed the ECS instance count to 1 (initially it was 2), SQS works fine. So how do I resolve this issue for multiple ECS containers?

DNS lookup spam running up Route 53 bill

Posted: 08 Mar 2022 04:33 AM PST

Throughout 2021 we saw around 15M DNS queries/month. In January 2022 we saw almost 300M and I didn't notice... then in February almost 1 TRILLION... and I noticed because of the bill. Amazon isn't really helping yet, even though I told them this is obviously spam.

This isn't the application layer, so there is nothing I can do, right...?

multiple interfaces match the same shared network dhcpd

Posted: 08 Mar 2022 04:03 AM PST

My netplan:

    network:
      version: 2
      renderer: networkd
      ethernets:
        ens160:
          dhcpd4: true
          nameservers:
            addresses: [8.8.8.8,8.8.4.4]
        ens192:
          dhcpd4: false
          addresses: [192.168.0.1/24]
        ens224:
          dhcpd4: false
          addresses: [192.168.10.1/24]

My dhcpd.conf:

    subnet 192.168.0.0 netmask 255.255.255.0 {
        interface ens192;
        option domain-name "Ubuntu.org";
        range 192.168.0.2 192.168.0.252;
        option routers 192.168.0.1;
        option domain-name-servers 192.168.0.1;
        option subnet-mask 255.255.255.0;
    }

I have no idea why I get the error from isc-dhcp-server: multiple interfaces match the same shared network: ens192 ens224.
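One quick check worth doing (a hedged suggestion, since dhcpd raises this error when two interfaces appear to be attached to the same IP subnet): compare the addresses actually configured on the interfaces, not just what netplan intends:

    ip -brief address   # ens192 and ens224 should show different /24s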

Thank you.

nginx reverse proxy IP_adr/1881 to localhost:1881 proxy_pass

Posted: 08 Mar 2022 03:59 AM PST

I have read this post and tried many things, but I have an issue with the rewrite regex.

I have many node.js processes as backends with always different port to access.

With an Nginx reverse proxy on the same server, I want to pass, for example, https://my-site/1881 to http://127.0.0.1:1881 via proxy_pass.

I can get 1881 from my-site/1881, but I always end up with 127.0.0.1:1881/1881, or an Nginx error. I don't know exactly how to remove /1881 with rewrite.

This is what I tried:

    location ~ ^/(?<port>\d\d\d\d)$ {
        # OK
        rewrite "^/[0-9]{4}(.*)$" $1 break;  # try and retry here
        proxy_pass http://127.0.0.1:$port;   # OK
    }
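For what it's worth, a minimal sketch of an alternative that avoids the rewrite entirely by capturing the remainder of the URI and passing it to the backend (untested, and the empty-remainder default is an assumption about the desired behavior):

    location ~ ^/(?<port>\d{4})(?<rest>/.*)?$ {
        # For a bare /1881, forward "/" so the backend never sees /1881.
        if ($rest = "") {
            set $rest "/";
        }
        proxy_pass http://127.0.0.1:$port$rest;
    }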

Thank you for your help, have a good day

VPN clients get disconnected on my machine very frequently (2 or 3 times per hour). The clients I'm using are FortiClient VPN and Cisco AnyConnect VPN

Posted: 08 Mar 2022 03:51 AM PST

VPN clients get disconnected on my machine very frequently (2 or 3 times per hour). The clients I'm using are FortiClient VPN and Cisco AnyConnect VPN.

The log from FortiClient is as below:

    3/8/2022 2:04:27 PM info sslvpn date=2022-03-08 time=14:04:26 logver=1 id=96600 type=securityevent subtype=sslvpn eventtype=status level=info uid=556CBFD17961472AB2443601856E703C devid=FCT8000116156976 hostname=DESKTOP-XXXXXX pcdomain=N/A deviceip=xxx.xxx.xxx.xxx devicemac=01-05-9b-3c-7v-00 site=N/A fctver=7.0.2.0090 fgtserial=FCT8000116156976 emsserial=N/A os="Microsoft Windows 10 Professional Edition, 64-bit (build 19041)" user=xxxxxxxxxxx@AzureAD msg="SSLVPN tunnel status" vpnstate=connected vpntunnel=vpn.xxxxxxxx.co.uk
    3/8/2022 2:17:26 PM error sslvpn FortiSslvpn: 8888: error: poll_send_ssl -SSL_get_error(): 5, try:1
    3/8/2022 2:17:26 PM error sslvpn FortiSslvpn: 8888: error: poll_send_ssl - WSAGetLastError():2745, try:1
    3/8/2022 2:17:26 PM error sslvpn FortiSslvpn: 8888: error: poll_send_ssl -data size: 50, try:1
    3/8/2022 2:17:26 PM error sslvpn FortiSslvpn: 8888: [handle_driver_read_event]: error: poll_send
    3/8/2022 2:18:55 PM info sslvpn FortiSslvpn: 12472: Ras: connection to fortissl terminated

In the Windows event logs, the following messages appear:

    Connectivity state in standby: Disconnected. Reason: Policy Setting
    7026 - Dump after return from D3 after cmd
    The system is entering connected standby. Reason: Idle timeout

I was not in an idle state for a long time when the disconnections happened.

There are no internet connectivity issues on my end.
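Since the event log blames connected standby, one hedged avenue is to confirm whether Modern/Connected Standby is active on this machine and what last triggered a sleep/wake transition (run from an elevated prompt):

    powercfg /a
    powercfg /lastwake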

Please add your thoughts for this issue. Thanks in advance.

Mail from Google Compute Engine not working, email delivery deferred

Posted: 08 Mar 2022 03:46 AM PST

Mail from cPanel installed on Google Compute Engine is not working; email delivery is deferred with this error:

    220-appsunshine.cprapid.com ESMTP Exim 4.94.2 #2 Tue, 08 Mar 2022 21:27:17 +1000
    220-We do not authorize the use of this system to transport unsolicited,
    220 and/or bulk e-mail.

Specifying a rule to block RDP access on Windows Server 2019 except for a range of addresses

Posted: 08 Mar 2022 03:45 AM PST

I want to block access to RDP on my Windows 2019 VPS except for an IP range.

Since I have a dynamic IP from my internet provider, I can't be certain which IP I will be using to access Remote Desktop myself. (Actually I use some other remote access software, but I want to keep Remote Desktop available just in case, so I can still access the server if the other software doesn't work.)

I'm guessing that at least the first octet of the IP address, e.g. the 123 part of 123.xxx.xxx.xxx, would probably not change.

I found an article here about how to configure a rule for the firewall for RDP.

However, I'm uncertain how to specify a range.

How would I specify the range of all addresses starting with 123?
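As a sketch of one way to express that range: all addresses whose first octet is 123 form the CIDR block 123.0.0.0/8, which PowerShell's firewall cmdlets accept directly (the rule name below is made up):

    New-NetFirewallRule -DisplayName "RDP from my ISP range" `
        -Direction Inbound -Protocol TCP -LocalPort 3389 `
        -RemoteAddress 123.0.0.0/8 -Action Allow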

Windows NTP client not syncing with Linux server

Posted: 08 Mar 2022 03:26 AM PST

I am trying to synchronize the time of three computers on a local network. Although having the smallest possible drift/error with respect to the world/internet would be great, it is not my concern. My main concern is to have the best possible synchronization between the three computers.

To achieve this, I have set up one of the two Ubuntu machines (192.168.1.50) to act as an NTP server. I have done this by editing the Ubuntu NTP server config file /etc/ntp.conf and adding:

    server 127.127.1.0 iburst
    fudge 127.127.1.0 stratum 10

Then I checked that the other Ubuntu computer (192.168.1.71) is synchronized with it. First I added server controlstation prefer iburst to the end of /etc/ntp.conf and restarted the time service with sudo service ntp restart. After that, I can check that these two computers are properly time-synced by running ntpdate -q 192.168.1.50:

    server 192.168.1.50, stratum 2, offset 0.001271, delay 0.02599
     8 Mar 11:06:36 ntpdate[17648]: adjust time server 192.168.1.50 offset 0.001271 sec

This seems to work properly, and a 0.001271 offset is acceptable for my purpose. Next is to do the same with Windows (192.168.1.201). First I check that the computers are indeed not synchronized:

    w32tm /stripchart /computer:192.168.1.50
    12:10:01, d:+00.0010124s o:-00.4908814s  [                          *|                           ]
    12:10:03, d:+00.0005757s o:-00.4907188s  [                          *|                           ]

Which makes sense, as the Windows client is at this point still synchronized to time.windows.com:

    w32tm /query /status
    Leap Indicator: 0(no warning)
    Stratum: 4 (secondary reference - syncd by (S)NTP)
    Precision: -23 (119.209ns per tick)
    Root Delay: 0.0386977s
    Root Dispersion: 8.2445365s
    ReferenceId: 0x33917B1D (source IP:  51.145.123.29)
    Last Successful Sync Time: 3/8/2022 12:13:23 PM
    Source: time.windows.com,9
    Poll Interval: 10 (1024s)

I changed the time server with w32tm /config /update /manualpeerlist:192.168.1.50,0x8 /syncfromflags:MANUAL and forced a resync with w32tm /resync:

    Sending resync command to local computer
    The command completed successfully.

Then I checked the time difference between the Ubuntu NTP server and this Windows machine again:

    w32tm /stripchart /computer:192.168.1.50
    Tracking 192.168.1.50 [192.168.1.50:123].
    The current time is 3/8/2022 12:22:01 PM.
    12:22:01, d:+00.0005075s o:-00.4568042s  [                          *|                           ]
    12:22:03, d:+00.0010415s o:-00.4566323s  [                          *|                           ]
    12:22:05, d:+00.0009737s o:-00.4569219s  [                          *|                           ]

This shows that the Windows NTP client is clearly not synchronized with the Ubuntu NTP server. However, if I check the status:

    w32tm /query /status
    Leap Indicator: 0(no warning)
    Stratum: 3 (secondary reference - syncd by (S)NTP)
    Precision: -23 (119.209ns per tick)
    Root Delay: 0.0314761s
    Root Dispersion: 8.2468633s
    ReferenceId: 0xC0A80132 (source IP:  192.168.1.50)
    Last Successful Sync Time: 3/8/2022 12:20:37 PM
    Source: 192.168.1.50,8
    Poll Interval: 10 (1024s)

It clearly says that the source is the right one (192.168.1.50) and that it was synchronized just before the query.
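A hedged sequence that sometimes helps here (the w32tm flags are standard; whether they resolve this particular ~0.45 s offset is untested): restart the time service, force an immediate re-sync, then re-check the offset:

    net stop w32time && net start w32time
    w32tm /resync /rediscover
    w32tm /stripchart /computer:192.168.1.50 /samples:3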

Postfix overwrites the sender

Posted: 08 Mar 2022 03:18 AM PST

So I'm having issues with my Postfix server. It is a relay, and it works with Linux machines (SUSE Leap 15.2 and SLES 12 SP5) but not with my Solaris machines (Solaris 8 and Solaris 10).

Here is the command I type on the Solaris 8 machine:

    mailx my.email@company.com
    Subject: Solaris8Machine
    Test Solaris 8 machine

And here is the /var/log/mail of my Postfix server:

    connect from solaris8machine.localdomain.com[192.168.1.53]
    478264A7B5: client=solaris8machine.localdomain.com[192.168.1.53]
    478264A7B5: replace: header From: Super-User <root@solaris8machine.localdomain.com> from solaris8machine.localdomain.com[192.168.1.53]; from=<mailbox.localdomain.fr@company.com> to=<my.email@company.com> proto=ESMTP helo=<solaris8machine.localdomain.com>: From: solaris8machine <UNIX@localdomain>

And here is my problem: it changes the normal sender, which I need to be mailbox.localdomain.fr@company.com, to the address I'm trying to send the email to, my.email@company.com.

I'm looking for hints, because I know it's not a problem with my Postfix configuration, since it works with the other Linux machines.
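One hedged check: list the Postfix parameters that can cause From: rewriting, to see whether any of them apply only to the Solaris clients (these are standard Postfix parameter names):

    postconf sender_canonical_maps smtp_header_checks local_header_rewrite_clients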

"gcloud app deploy" hangs on "Building and pushing image for service"

Posted: 08 Mar 2022 03:00 AM PST

I suddenly can't deploy using gcloud app deploy.

It hangs on "Building and pushing image for service [default]". At that point, the Python process takes 99% CPU and continues until the deploy times out. I've tried upgrading Python to no avail.

It occurs regardless of the Google App Engine project. I have tried installing different versions of the gcloud CLI, to no avail.

My teammates can deploy successfully using the same commands. Any ideas?
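One hedged debugging step: re-run the deploy with full client-side logging to see the last operation before the hang (--verbosity is a standard gcloud global flag):

    gcloud app deploy --verbosity=debug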

Access denied for virtual users (ProFTPD)

Posted: 08 Mar 2022 02:48 AM PST

I can't connect to the FTP server as a virtual user, but I can connect as the ubuntu user. I tried setting the permissions of /var/www/host to 777, 0755, and 775, but I still get access denied.

Here's my config: https://pastebin.com/Y3KWu8up

My virtual user's home directory is /var/www/host.
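A hedged check using ProFTPD's own ftpasswd tool: make sure the virtual user's UID/GID actually match the owner of the home directory (the user name, UID/GID of 33, and passwd file path below are assumptions):

    ftpasswd --passwd --file=/etc/proftpd/ftpd.passwd \
        --name=virtualuser --uid=33 --gid=33 \
        --home=/var/www/host --shell=/bin/false
    ls -ldn /var/www/host   # owner UID/GID should match the entry above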

Why is Docker volume world-writable if set to /tmp?

Posted: 08 Mar 2022 04:29 AM PST

For context:

    docker --version
    Docker version 20.10.7, build 20.10.7-0ubuntu5~20.04.2

Test 1: volume is /myvolume

Here's my Dockerfile

    FROM alpine:latest
    USER 1000:1000
    VOLUME /myvolume

and the build + run commands:

    docker build -t myimage .
    docker run --rm -it myimage

Then, once in the container:

    / $ whoami
    whoami: unknown uid 1000
    / $ ls -ld /myvolume/
    drwxr-xr-x    2 root     root          4096 Mar  8 09:22 /myvolume/
    / $ touch /myvolume/test
    touch: /myvolume/test: Permission denied

So far, no surprise: the user with UID 1000 can't write to /myvolume.

Test 2: volume is /tmp

My Dockerfile

    FROM alpine:latest
    USER 1000:1000
    VOLUME /tmp

(same build + run commands), and in the container:

    / $ whoami
    whoami: unknown uid 1000
    / $ ls -ld /tmp
    drwxrwxrwt    2 root     root          4096 Nov 24 09:20 /tmp
    / $ touch /tmp/test
    / $ ls -l /tmp
    total 0
    -rw-r--r--    1 1000     1000             0 Mar  8 09:23 test

Now that the volume is /tmp, the user with UID 1000 can write to it.

I know /tmp is typically world-writable on GNU/Linux, but here this looks "magical" (which is fine only when Harry Potter is around), and I'm wondering whether:

a) I'm missing something about how Docker and volumes work (please refer me to appropriate documentation / tutorials)

b) it's a coincidence due to my setup / something's missing to be explicit and stop relying on defaults

c) it's an undocumented feature that may change any time without notice

d) it's a feature I've not been able to find documentation about, and I can safely rely on the fact that when a volume is attached to /tmp, it is always world-writable
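For what it's worth, Docker initializes a new anonymous volume from the contents and permissions of the path as it already exists in the image, and Alpine ships /tmp with the usual 1777 sticky, world-writable mode, while a directory freshly created by VOLUME is 0755 root:root. A quick check of the image itself (not the volume) is consistent with that reading:

    docker run --rm alpine:latest stat -c '%a %U:%G %n' /tmp
    # 1777 root:root /tmp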

Is 250 Mbps on a cheap VPS enough for 500 CCU listening to a radio stream?

Posted: 08 Mar 2022 02:56 AM PST

I'd like to use a cheap VPS hosted by OVH, France (1 vCore, 2 GB RAM, 40 GB SSD NVMe, 250 Mbps unmetered) to host an Icecast server, which will be used for an event this month. There will be up to 500 CCUs listening to the 128 kbps audio stream.

Based on my reading of this article, it seems to me that 250 Mbps should be enough to handle the load, but I haven't got any experience managing this kind of problem.

My reasoning is that 128 kbps × 500 CCU + 10% overhead ≈ 70 Mbps.
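Spelled out, that arithmetic checks out:

    echo $(( 128 * 500 ))               # 64000 kbps = 64 Mbps raw stream
    echo $(( 128 * 500 * 110 / 100 ))   # 70400 kbps ≈ 70 Mbps with 10% overhead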

I'm also wondering whether the 250 Mbps unmetered bandwidth supplied by OVH is guaranteed, or whether the load from other clients' services hosted on the machine could have an impact on performance. (I asked OVH already, but they weren't especially helpful.)

Thank you for your insights! Samuel

How do you mount a k8s service account token as an environment variable?

Posted: 08 Mar 2022 05:04 AM PST

When you associate a service account with a pod, the token gets mounted in the /var/run/secrets/kubernetes.io/ folder, but I don't see a way to add the secret as an environment variable. The issue is that setting up a reference in the pod to the service account's secret is not possible, because the secret generated from a service account has an auto-generated name, so you can't use env.valueFrom.secretKeyRef in the pod config. Is there a way to do this without creating a secret manually?
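A common workaround (a sketch only; the pod, image, service account, and variable names are made up) is to read the projected token file at container start and export it, since env.valueFrom can't reference the auto-named secret:

    apiVersion: v1
    kind: Pod
    metadata:
      name: token-env-demo            # hypothetical
    spec:
      serviceAccountName: my-sa       # hypothetical
      containers:
        - name: app
          image: busybox
          command: ["/bin/sh", "-c"]
          # Export the mounted token as an env var, then start the app.
          args:
            - export SA_TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token) && exec sleep 3600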

GCP External HTTPs Load Balancer - 404 - 503 - SSL Exception (Remote host terminated connection, read handshake, socket closed & upstream connect)

Posted: 08 Mar 2022 05:07 AM PST

We're load-testing a MIG (with 2 instances) hosted behind the HTTPS load balancer, using JMeter.

Observation 1: We randomly receive 404 and 503 errors. For each 404 we see an entry created within load balancer monitoring: NO_BACKEND_SELECTED (rather than our actual MIG backend). Further, for each 503 we see an entry created within load balancer monitoring: FRONTEND_5XX.

Based on GCP:

    NO_BACKEND_SELECTED - An error or other interruption occurred before a backend could be selected.
    FRONTEND_5XX - An internal error occurred before the GFE could select a backend. The GFE returned 5XX to the client.

The above statements don't help with troubleshooting, resolving, or isolating the cause of the issue. We didn't find anything with respect to these error messages in the GCP docs or other articles.

Observation 2: We randomly receive SSL exceptions at JMeter's end: remote host terminated connection, read handshake, socket closed, and upstream connect.

Steps taken

  1. Changed Keep-Alive on the backend servers to 620 sec (GFE has a Keep-Alive of 600 sec)
  2. Created a custom SSL policy (minimum TLS version set to 1.1)
  3. Increased the backend timeout from the default 30 to 65 seconds

So, we are looking at what we are missing, or what else we can fine-tune/modify for testing purposes, in order to get the above-mentioned issues resolved.

Thank you. Gaurav_N17

How to set logic to create multiple machines on Azure using Terraform?

Posted: 08 Mar 2022 03:14 AM PST

Below is the template I have for an Azure VM.

In Google Cloud, I have heard there is an option to set a count for creating multiple machines.

How can I create multiple machines using a single template, so that the number of machines created is based on a variable's value?

Here is a sample template for an Azure Windows Server VM.

github url: link

I want to keep this repo permanently public, so I'm not posting the files directly here.
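For reference, a minimal sketch of the Terraform side (the resource and variable names below are assumptions, not taken from the linked repo): the count meta-argument works on azurerm resources the same way it does on Google ones:

    variable "vm_count" {
      type    = number
      default = 3
    }

    resource "azurerm_windows_virtual_machine" "vm" {
      count = var.vm_count
      name  = "winvm-${count.index}"
      # ...remaining arguments exactly as in the single-VM template...
    }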

SSL for devices in local network

Posted: 08 Mar 2022 06:17 AM PST

Initial question

We make devices which run a webserver, and the user can control some functionality of the device by browsing directly to the IP of the device. This can be a fixed IP when a direct WiFi or Ethernet connection is used, but in most cases it is the IP that the device has received from a DHCP server in the network.

More and more, HTTPS is required to access some of the more advanced functionality of a browser: for example the Cache API (https://developer.mozilla.org/en-US/docs/Web/API/Cache), webcam access (https://blog.mozilla.org/webrtc/camera-microphone-require-https-in-firefox-68/), Service Workers (https://www.digicert.com/dc/blog/https-only-features-in-browsers/), and so on. The list keeps growing every day.

I'm all for having secure systems, but I think there is one major issue. The way HTTPS (TLS) is set up, a certificate is only marked as valid if the domain name matches the one in the certificate and the certificate authority is accepted by the client's browser - the chain of trust, as it is called. This works beautifully on the web, where fixed hostnames are used.

However, when users are not using the internet but their local network, the hostname is not known beforehand. Sometimes users can use local DNS or mDNS, but this is not always the case. Many times users just use the internal IPv4 address. This is where the trouble begins, because there are two options when using the devices we make:

  1. The user does not use HTTPS (we do not enforce it; read on to see why). The major browsers at this time do not give an explicit warning but mark the page as 'Not secure' in light grey. Most users don't even notice it and are very happy.
  2. The user uses HTTPS on the same device. Although this makes their connection more secure, the browsers now explicitly tell them to use the device with extreme caution and that the connection is probably hacked and private data could be stolen. The site is now marked 'insecure' in red, and the user must press 2 or 3 buttons to allow a certificate exception.

Option number 2 is the reason we do not force the devices to be accessed over HTTPS: it simply alarms too many users and floods customer service. Five years ago this was not really an issue, because everything could be done without HTTPS. With more and more APIs now only working in a 'Secure Context', this is really becoming a problem for us.

Therefore I think the need is becoming very great to come up with a system to use HTTPS without the hostname system, strictly in internal networks. I could imagine that the private IPv4 ranges could be excluded from the warnings, or something more clever. This brings me to my question: do you face the same problems, and how can this be solved?

Update 1

As pointed out in the first comment, the currently proposed solution is to use a wildcard certificate and to configure a DNS entry for the device on a public domain. This, however, has the issue that the client still requires an active internet connection, which is certainly not always the case in these kinds of setups.

Update 2

I also found this article from Let's Encrypt which talks about the same subject without giving a solution: https://letsencrypt.org/docs/certificates-for-localhost/

Update 3: hypothetical solution idea

After reading the answers and comments below, I thought of a possible secure solution for the problem. Would the setup below (if it were allowed) be secure?

  1. Request an intermediate CA certificate from a trusted root CA which has Name Constraints that only allow it to create further intermediate CAs, each of which can only create certificates for a single fixed hostname, '*.mydevice.local' or something similar, and which allow all private IPv4 addresses to be used in the SAN.
  2. Every deployed device would be factory-installed with a unique intermediate CA created by the intermediate CA from step 1. This on-device CA would then be name-constrained to '.mydevice.local'.
  3. Every time the device changes IP address (boot, DHCP change, ...), it would then be able to generate a certificate with its on-device intermediate CA.

I think this would solve the problem completely and have the following advantages:

  • No browser warnings, because the chain of trust leads back to the trusted root CA.
  • Every device would have a unique certificate.
  • Compromise of a single intermediate CA would not be that big of an issue, because it can only be used to create trusted certificates for the device's specific fixed hostname.

Please comment if I have overlooked something.
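To make the idea above concrete, a hedged sketch of what the name constraint on the per-device intermediate could look like in an openssl extensions file (the syntax is standard openssl x509v3 config; whether any public root CA would issue such an intermediate is exactly the open question):

    [ v3_device_intermediate ]
    basicConstraints = critical, CA:true, pathlen:0
    keyUsage = critical, keyCertSign, cRLSign
    nameConstraints = critical, permitted;DNS:.mydevice.local, permitted;IP:192.168.0.0/255.255.0.0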

Update 4:

I want to thank everyone for all the help and thinking along. The conclusion for me is that the whole idea behind certificates and the trust chain behind them doesn't allow what I want. This is because there is simply no way for a CA to be sure that the internal IP address I'm pointing to is uniquely owned by the device that I want to reach. An internal IP, for example 192.168.0.10, is used by thousands of devices, and thus it is not possible to grant a certificate which allows browsers to show no warning.

The only option is to do the certificate validation by manual intervention (installing the device certificate, pushing your own device CA to the users, and the various more complex options proposed in the answers). This is simply something I need to live with.

Nevertheless, I think I'm going to open a ticket with Firefox and Chrome, because I think that for internal IP addresses a simple grey non-secure warning, as with HTTP, is more than enough of a warning. The red warnings should only be shown when making use of HTTPS in the use case it was designed for.

Update 5:

I have filed a bug report at Bugzilla: https://bugzilla.mozilla.org/show_bug.cgi?id=1705543. I'm posting this link as a reference so anyone can follow the issue.

What's the point of Azure Service Endpoint?

Posted: 08 Mar 2022 06:20 AM PST

I guess I'm missing something, but I just don't get Service Endpoints.

Let's say I have Azure SQL, and I want to secure it as much as possible. Now, I can use the firewall IP rules to protect it from unauthorized access from the public web.

This, if I get it right, has nothing to do with Service Endpoints.

So I can set an endpoint to connect, say, a VM in my subscription to the Azure SQL. But what's the difference whether I do or don't have a service endpoint? From what I gathered, the service endpoint makes my resources access the SQL via the Azure backbone instead of via the public IP. So that means service endpoints have nothing to do with outside access, which is still protected using the firewall's IP rules.

Is that correct?

Does a service endpoint protect against Azure resources accessing the SQL using its public IP?

I really feel I'm missing something...

Thanks!

iptables v1.8.2 (nf_tables): RULE_APPEND failed (Invalid argument): rule in chain OUTPUT

Posted: 08 Mar 2022 06:02 AM PST

On Debian 10, I am trying to apply the following iptables rules:

    ip rule add fwmark 1 table 100
    ip route add local 0.0.0.0/0 dev lo table 100

    iptables -t mangle -N V2RAY
    iptables -t mangle -A V2RAY -d 127.0.0.1/32 -j RETURN
    iptables -t mangle -A V2RAY -d 224.0.0.0/4 -j RETURN
    iptables -t mangle -A V2RAY -d 255.255.255.255/32 -j RETURN
    iptables -t mangle -A V2RAY -d 192.168.0.0/16 -p tcp -j RETURN
    iptables -t mangle -A V2RAY -d 192.168.0.0/16 -p udp ! --dport 53 -j RETURN
    iptables -t mangle -A V2RAY -p udp -j TPROXY --on-port 12345 --tproxy-mark 1
    iptables -t mangle -A V2RAY -p tcp -j TPROXY --on-port 12345 --tproxy-mark 1
    iptables -t mangle -A PREROUTING -j V2RAY

    iptables -t mangle -N V2RAY_MASK
    iptables -t mangle -A V2RAY_MASK -d 224.0.0.0/4 -j RETURN
    iptables -t mangle -A V2RAY_MASK -d 255.255.255.255/32 -j RETURN
    iptables -t mangle -A V2RAY_MASK -d 192.168.0.0/16 -p tcp -j RETURN
    iptables -t mangle -A V2RAY_MASK -d 192.168.0.0/16 -p udp ! --dport 53 -j RETURN
    iptables -t mangle -A V2RAY_MASK -j RETURN -m mark --mark 0xff
    iptables -t mangle -A V2RAY_MASK -p udp -j MARK --set-mark 1
    iptables -t mangle -A V2RAY_MASK -p tcp -j MARK --set-mark 1
    iptables -t mangle -A OUTPUT -j V2RAY_MASK

but I get an error on the last rule:

    iptables v1.8.2 (nf_tables): RULE_APPEND failed (Invalid argument): rule in chain OUTPUT
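One hedged diagnostic, given that Debian 10 defaults to the nf_tables backend of iptables: try the same rule through the legacy backend, which sometimes accepts rules the nf_tables translation rejects:

    update-alternatives --config iptables     # switch to iptables-legacy to test
    iptables-legacy -t mangle -A OUTPUT -j V2RAY_MASK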

How to route all VM traffic through specific physical interface over a Linux bridge?

Posted: 08 Mar 2022 03:03 AM PST

My objective is to have all KVM guest VMs send and receive traffic on em2, with addresses in the 192.168.2.0/24 subnet.

I have a host Linux machine (CentOS 7) with several NICs, 2 of which are in use in this scenario, em1 and em2.

The em1 interface has an IP of 192.168.0.131. The em2 interface has been attached to br0, so it doesn't have an IP itself, but br0 has been assigned the IP address 192.168.2.1.

I have created a route on my Netgear firewall to direct 192.168.2.0/24 traffic to 192.168.2.1, but this address doesn't show as an attached device the way 192.168.0.131 does, maybe because it's a virtual Linux bridge.

From the VM host, I can ping the "bridge gateway", the VM guest, and the firewall gateway to the internet:

    [root@boss ~]# ping -c1 192.168.2.1
    64 bytes from 192.168.2.1: icmp_seq=1 ttl=64 time=0.085 ms

    [root@boss ~]# ping -c1 192.168.2.10
    64 bytes from 192.168.2.10: icmp_seq=1 ttl=64 time=0.476 ms

    [root@boss ~]# ping -c1 192.168.0.254
    64 bytes from 192.168.0.254: icmp_seq=1 ttl=64 time=4.17 ms

And from the guest VM, I can ping em1, but not the internet gateway, 192.168.0.254:

    [root@localhost ~]# ping -c1 192.168.0.131
    64 bytes from 192.168.0.131: icmp_seq=1 ttl=64 time=0.282 ms

    [root@localhost ~]# ping -c1 192.168.0.254
    PING 192.168.0.254 (192.168.0.254) 56(84) bytes of data.

    --- 192.168.0.254 ping statistics ---
    1 packets transmitted, 0 received, 100% packet loss, time 0ms

This is my config for em2:

    DEVICE=em2
    TYPE=Ethernet
    ONBOOT=yes
    BRIDGE=br0

And br0:

    DEVICE=br0
    BOOTPROTO=none
    ONBOOT=yes
    TYPE=Bridge
    IPADDR=192.168.2.1
    PREFIX=24
    GATEWAY=192.168.0.254
    ZONE=public
    STP=no

My routing table on the VM host:

    [root@boss ~]# route -n
    Kernel IP routing table
    Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
    0.0.0.0         192.168.0.254   0.0.0.0         UG    0      0        0 em1
    192.168.0.0     0.0.0.0         255.255.255.0   U     0      0        0 em1
    192.168.2.0     0.0.0.0         255.255.255.0   U     0      0        0 br0

The guest VM was started with virt-install:

    virt-install \
      --name vm-guest-1 \
      --network bridge=br0 \
      --virt-type kvm \

Guest VM eth0:

DEVICE="eth0"  BOOTPROTO="none"  ONBOOT="yes"  TYPE="Ethernet"  IPADDR="192.168.2.10"  NETMASK=255.255.255.0  

And the guest VM routing table:

    [root@localhost ~]# route -n
    Kernel IP routing table
    Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
    0.0.0.0         192.168.2.1     0.0.0.0         UG    0      0        0 eth0
    192.168.2.0     0.0.0.0         255.255.255.0   U     0      0        0 eth0

As requested, my host bridge output:

    [root@boss ~]# brctl show
    bridge name     bridge id           STP enabled    interfaces
    br0             8000.d4ae529de039   no             em2
                                                       vnet0

Question/Problem:

How do I / why can't I route into my guest VM, or rather, why can't my guest VM get out to the internet?
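Two hedged checks this setup depends on (the host is routing between br0 and em1, and it appears to run firewalld, given ZONE=public above):

    sysctl net.ipv4.ip_forward             # must be 1 for the host to route
    firewall-cmd --zone=public --list-all  # forwarding may be blocked here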

Sonicwall Global VPN user either can't reach internet, or LAN depending on Access List

Posted: 08 Mar 2022 05:04 AM PST

I have a Sonicwall running firmware 6.5.4.4-44n and have a standard VPN (not SSL-VPN) setup which I'm connecting to via the Global VPN Client for Windows. The WAN Group VPN is setup to be a "Split Tunnel" and I have both "Set Default Gateway as this Gateway" and "Apply VPN Control List" NOT checked (checking either doesn't seem to make a difference in the behavior)

What I would like to accomplish is that users connected to the VPN can access the "X0 Subnet" (an address object defined as 10.0.0.0/255.255.255.0) through the VPN, and the rest of the internet via their own external connection (i.e., NOT route internet traffic through the VPN).

What I've found is that my users can either:

  1. Access the internet, but not the LAN, if I set the user "VPN Access" to be "X0 Subnet" and nothing else
  2. Access the LAN, but not the internet, if I set the user "VPN Access" to "WAN RemoteAccess Networks" (which is defined as 0.0.0.0/0.0.0.0)

Perhaps I'm missing what "VPN Access" means, but this seems like the opposite of the behavior I would expect (giving "X0 Subnet" access results in the user not being able to access the "X0 Subnet"). I've been trying different configurations and following various internet posts for the past 2 days without making any progress. Does anyone have an idea of what is going on here?

With "LAN Networks" in the access list, here is my client route map. My (non VPN client network is 10.0.2.0/24. The remote network I'm trying to access is 10.0.0.0/24, which is in the "LAN Subnets" list)

    route print
    ===========================================================================
    Interface List
      7...00 60 73 0e 22 ad ......SonicWALL Virtual NIC
      5...08 00 27 be f3 85 ......Intel(R) PRO/1000 MT Desktop Adapter
      1...........................Software Loopback Interface 1
    ===========================================================================

    IPv4 Route Table
    ===========================================================================
    Active Routes:
    Network Destination        Netmask          Gateway       Interface  Metric
              0.0.0.0          0.0.0.0         10.0.2.2        10.0.2.15     25
             10.0.0.0    255.255.255.0         On-link        10.0.0.213    257
           10.0.0.213  255.255.255.255         On-link        10.0.0.213    257
           10.0.0.255  255.255.255.255         On-link        10.0.0.213    257
             10.0.2.0    255.255.255.0         On-link         10.0.2.15    281
            10.0.2.15  255.255.255.255         On-link         10.0.2.15    281
           10.0.2.255  255.255.255.255         On-link         10.0.2.15    281
         33.33.171.50  255.255.255.255         10.0.2.2        10.0.2.15     25
         33.33.171.50  255.255.255.255         On-link        10.0.0.213      2
            127.0.0.0        255.0.0.0         On-link         127.0.0.1    331
            127.0.0.1  255.255.255.255         On-link         127.0.0.1    331
      127.255.255.255  255.255.255.255         On-link         127.0.0.1    331
            224.0.0.0        240.0.0.0         On-link         127.0.0.1    331
            224.0.0.0        240.0.0.0         On-link         10.0.2.15    281
            224.0.0.0        240.0.0.0         On-link        10.0.0.213    257
      255.255.255.255  255.255.255.255         On-link         127.0.0.1    331
      255.255.255.255  255.255.255.255         On-link         10.0.2.15    281
      255.255.255.255  255.255.255.255         On-link        10.0.0.213    257
    ===========================================================================
    Persistent Routes:
      None

    IPv6 Route Table
    ===========================================================================
    Active Routes:
     If Metric Network Destination      Gateway
      1    331 ::1/128                  On-link
      5    281 fe80::/64                On-link
      7    281 fe80::/64                On-link
      7    281 fe80::6520:9f25:dd7:33ee/128    On-link
      5    281 fe80::bd8b:6045:f79a:1ff9/128   On-link
      1    331 ff00::/8                 On-link
      5    281 ff00::/8                 On-link
      7    281 ff00::/8                 On-link
    ===========================================================================
    Persistent Routes:
      None

Thanks in advance

linux + tput: No value for $TERM and no -T specified

Posted: 08 Mar 2022 05:04 AM PST

I use the tput command in my bash script in order to color the text, as in:

tput setaf 2  

When I run the script from PuTTY or a console, everything is OK,

but when an external Windows application engine runs the script via SSH, we get the following error from tput:

    tput: No value for $TERM and no -T specified
    tput: No value for $TERM and no -T specified
    tput: No value for $TERM and no -T specified
    tput: No value for $TERM and no -T specified

Please advise what needs to be set (environment variable or otherwise) in my bash script in order to use the tput command.

What value should be set for $TERM (in my bash script)?
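A hedged guard you could put at the top of the script (the xterm fallback assumes an xterm terminfo entry exists, which it does on stock Ubuntu/CentOS):

    # Fall back to a sane TERM when the script runs over a
    # non-interactive SSH session that sets none.
    if [ -z "${TERM:-}" ] || [ "$TERM" = "dumb" ]; then
        export TERM=xterm
    fi
    tput setaf 2 2>/dev/null || true   # never die on a tput failure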

proxy_fcgi:error (70008)Partial results are valid but processing is incomplete. AH01075

Posted: 08 Mar 2022 03:30 AM PST

I have a server running with:

  • Ubuntu 16.04
  • Apache 2.4.18
  • WORKER-MPM
  • PHP 7.0.8-0ubuntu0.16.04.3
  • PHP-FPM
  • OPcache 7.0.8-0ubuntu0.16.04.3

On the browser there is an Ajax script that sends a query every 5 seconds to a PHP file to update a timestamp in the DB. This script works well on other servers, but here, with not that many users, it logs the following error:

[Mon Dec 05 09:11:39.575035 2016] [proxy_fcgi:error] [pid 7831:tid 140159538292480] (70008)Partial results are valid but processing is incomplete: [client 172.30.197.200:64422] AH01075: Error dispatching request to : (reading input brigade), referer: http://10.200....file.php

I have no idea what it is or how to fix it. I have searched the entire web and didn't find much; any hint would be appreciated.

Edit 1:

I switched the log level to debug, and the full log for the error is this:

[Wed Dec 07 08:55:13.465599 2016] [authz_core:debug] [pid 5461:tid 139687427467008] mod_authz_core.c(809): [client 172.31.42.163:54432] AH01626: authorization result of Require all granted: granted, referer: http://10.200.200.214/sala.php?sala=Unica

[Wed Dec 07 08:55:13.465613 2016] [authz_core:debug] [pid 5461:tid 139687427467008] mod_authz_core.c(809): [client 172.31.42.163:54432] AH01626: authorization result of <RequireAny>: granted, referer: http://10.200.200.214/sala.php?sala=Unica

[Wed Dec 07 08:55:13.465634 2016] [proxy:debug] [pid 5461:tid 139687427467008] mod_proxy.c(1160): [client 172.31.42.163:54432] AH01143: Running scheme unix handler (attempt 0), referer: http://10.200.200.214/sala.php?sala=Unica

[Wed Dec 07 08:55:13.465640 2016] [proxy_fcgi:debug] [pid 5461:tid 139687427467008] mod_proxy_fcgi.c(879): [client 172.31.42.163:54432] AH01076: url: fcgi://localhost/var/www/html/sala.server.php proxyname: (null) proxyport: 0, referer: http://10.200.200.214/sala.php?sala=Unica

[Wed Dec 07 08:55:13.465652 2016] [proxy_fcgi:debug] [pid 5461:tid 139687427467008] mod_proxy_fcgi.c(886): [client 172.31.42.163:54432] AH01078: serving URL fcgi://localhost/var/www/html/sala.server.php, referer: http://10.200.200.214/sala.php?sala=Unica

[Wed Dec 07 08:55:13.465658 2016] [proxy:debug] [pid 5461:tid 139687427467008] proxy_util.c(2160): AH00942: FCGI: has acquired connection for (*)

[Wed Dec 07 08:55:13.465663 2016] [proxy:debug] [pid 5461:tid 139687427467008] proxy_util.c(2213): [client 172.31.42.163:54432] AH00944: connecting fcgi://localhost/var/www/html/sala.server.php to localhost:8000, referer: http://10.200.200.214/sala.php?sala=Unica

[Wed Dec 07 08:55:13.465668 2016] [proxy:debug] [pid 5461:tid 139687427467008] proxy_util.c(2250): [client 172.31.42.163:54432] AH02545: fcgi: has determined UDS as /run/php/php7.0-fpm.sock, referer: http://10.200.200.214/sala.php?sala=Unica

[Wed Dec 07 08:55:13.465735 2016] [proxy:debug] [pid 5461:tid 139687427467008] proxy_util.c(2422): [client 172.31.42.163:54432] AH00947: connected /var/www/html/sala.server.php to httpd-UDS:0, referer: http://10.200.200.214/sala.php?sala=Unica

[Wed Dec 07 08:55:13.465771 2016] [proxy:debug] [pid 5461:tid 139687427467008] proxy_util.c(2701): AH02823: FCGI: connection established with Unix domain socket /run/php/php7.0-fpm.sock (*)

[Wed Dec 07 08:55:13.480503 2016] [proxy_fcgi:error] [pid 5461:tid 139687427467008] (70008)Partial results are valid but processing is incomplete: [client 172.31.42.163:54432] AH01075: Error dispatching request to : (reading input brigade), referer: http://10.200.200.214/sala.php?sala=Unica

[Wed Dec 07 08:55:13.480533 2016] [proxy:debug] [pid 5461:tid 139687427467008] proxy_util.c(2175): AH00943: FCGI: has released connection for (*)
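One hedged avenue, not a confirmed fix: AH01075 with "reading input brigade" fires while the request body is still being read, which often correlates with clients aborting or truncating the POST body (for example, an Ajax poll cancelled mid-flight). Giving the proxy worker more headroom is worth a try (ProxyTimeout is a standard mod_proxy directive; the value is an assumption):

    ProxyTimeout 120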

TFS BuildHttpClient UpdateDefinition C# example

Posted: 08 Mar 2022 04:03 AM PST

I need to update a vNext Build Definition programmatically. The reason for the need to programmatically update the build definition is that we are running the RTM version of Team Foundation Server 2015, and as of that release certain parts of the vNext Build Definitions are not exposed to the web GUI, and there is (as yet) no other way to change them. (Assuming that you want to keep your database in a supported state, and refuse to modify the database directly.)

Our corporate environment and all machines recently went through a domain change. The TFS server was moved to the new domain with no issues. However, the vNext Build definition has an internal reference to the old server name in the URL field, which still has the old domain name inside it.

So far, I have the following code, which should update the URL of each build definition of a certain project. The call to GetDefinitionsAsync clearly returns the proper build DefinitionReferences to me, but UpdateDefinitionAsync does not seem to have any effect.

    List<DefinitionReference> bds = new List<DefinitionReference>();
    .
    .
    .
    {
        Uri tfsURI = new Uri("http://<tfsserver>:8080/tfs/<collection>");
        WindowsCredential wc = new WindowsCredential(true);
        BuildHttpClient bhc = new BuildHttpClient(tfsURI, new VssCredentials(wc));

        var task = Task.Run(async () => { bds = await bhc.GetDefinitionsAsync(project: "projectname"); });
        task.Wait();

        foreach (var bd in bds)
        {
            BuildDefinition b = (BuildDefinition)bd;
            b.Url = b.Url.Replace("<server>.<olddomain>", "<server>.<newdomain>");

            var task1 = Task.Run(async () => { await bhc.UpdateDefinitionAsync(b); });
            task1.Wait();
        }
    }

This code snippet compiles and runs without error. However, when I examine the build definition afterward, it has not been updated and remains as before. There are no exceptions seen by the debugger, and there are no event viewer or DebugView messages of relevance.

Regarding the above code snippet, I am uncertain whether I am supposed to obtain the BuildDefinition that I need to pass to UpdateDefinitionAsync by casting the DefinitionReference to its subclass BuildDefinition or not, but I see nothing in the BuildHttpClient class that will give me a BuildDefinition from a DefinitionReference.

Any help would be appreciated. Thanks!
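A hedged sketch of an alternative worth trying (GetDefinitionAsync is a real BuildHttpClient method; whether the missing revision data is the actual blocker here is an assumption): fetch the full BuildDefinition by id rather than down-casting the reference, so the object carries its complete state when sent back:

    foreach (var bd in bds)
    {
        // Fetch the complete definition (including its current Revision) by id.
        BuildDefinition full = await bhc.GetDefinitionAsync(bd.Project.Name, bd.Id);
        full.Url = full.Url.Replace("<server>.<olddomain>", "<server>.<newdomain>");
        await bhc.UpdateDefinitionAsync(full);
    }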

Regex nginx location with named location

Posted: 08 Mar 2022 04:03 AM PST

I have the following setup - a production version of some software:

    location @myradio {
        rewrite ^/myradio/([^/]+)/([^/]+)/?  /myradio/index.php?module=$1&action=$2 last;
        rewrite ^/myradio/([^/]+)/?          /myradio/index.php?module=$1 last;
    }

    location /myradio {
        alias /usr/local/www/myradio/src/Public;
        try_files $uri $uri/ @myradio;

        location ~ \.php {
            fastcgi_index   index.php;
            include         fastcgi_params;
            fastcgi_param   SCRIPT_FILENAME    $request_filename;
            fastcgi_pass    php5-fpm;
        }
    }

and several development versions - there's also myradio-lordaro, among others.

    location @myradiodev {
        rewrite ^/myradio-([^/]+)/([^/]+)/([^/]+)/?  /myradio-$1/index.php?module=$2&action=$3 last;
        rewrite ^/myradio-([^/]+)/([^/]+)/?          /myradio-$1/index.php?module=$2 last;
    }

    location /myradio-dev {
        alias /usr/local/www/myradio-dev/src/Public;
        try_files $uri $uri/ @myradiodev;

        location ~ \.php {
            fastcgi_index   index.php;
            include         fastcgi_params;
            fastcgi_param   SCRIPT_FILENAME    $request_filename;
            fastcgi_pass    php5-fpm;
        }
    }

Both of these work perfectly fine, but copying out the same /myradio-* config several times seems inefficient, and I feel like I can do better.

Is it possible to generalise the development configs into one that uses a regex to point nginx at the correct location? The @myradiodev named location is used successfully for all dev versions, so I don't believe that's the issue, but my own attempts have just resulted in various 403 or 404 errors, with no clear idea of where nginx is trying to access.

[Other recommendations on how to clean this up are appreciated (it was originally converted from an Apache config)]
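A hedged sketch of the generalisation (untested; note that alias inside a regex location must reference the capture, and try_files combined with alias has known quirks, so this may need tweaking):

    location ~ ^/myradio-(?<dev>[^/]+) {
        alias /usr/local/www/myradio-$dev/src/Public;
        try_files $uri $uri/ @myradiodev;

        location ~ \.php {
            fastcgi_index   index.php;
            include         fastcgi_params;
            fastcgi_param   SCRIPT_FILENAME    $request_filename;
            fastcgi_pass    php5-fpm;
        }
    }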

Can't request computer certificate

Posted: 08 Mar 2022 06:02 AM PST

I am using MMC with the Certificates snap-in. I am requesting certificates from a brand-new installation of a CA.

Requesting user certificates works perfectly. Requesting computer certificates fails and says the RPC service is unavailable.

What should I check?
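A hedged first check (the CA host and name below are placeholders): verify DCOM/RPC reachability to the CA from the requesting machine, since enrollment happens over RPC:

    certutil -ping -config "caserver.domain.local\My-Issuing-CA"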

Encryption on Solaris (using Keystore)

Posted: 08 Mar 2022 03:03 AM PST

I am trying to draft a secure way to encrypt (on the fly, invoked from an app) and decrypt sensitive information (credit cards) using AES-256.

The target platform is:

cat /etc/release

Solaris 10 10/09 s10s_u8wos_08a SPARC

The optimal solution would be to be able to save the keys inside a keystore and use encrypt/decrypt (paired with uuencode, so that the resulting encrypted string can be saved in a normal DB field).

We have successfully tested the whole chain using just AES-128 (available out of the box with a basic Solaris install), and we understand we need to upgrade the target environment with the correct Solaris package to get to AES-256 [the SUNWcry package - the unbundled Solaris Data Encryption Kit].

What escapes me is how to make "encrypt" access a key from the keystore. Oracle documentation mentions "-K" as a command-line parameter (note this is an uppercase K) to do this (see here, for example), but the "-K" switch seems not to be available on our test machine.

Is this possible? Is it linked to the specific Solaris version? If not, can we obtain it by installing something else? (We haven't yet installed the crypto package to get to AES-256, so no idea whether this will come "for free" with it.)
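For reference, a hedged sketch of the keystore-less alternative that does exist on stock Solaris 10: keep the raw key in a root-only file and pass it with the lowercase -k option (paths are placeholders; this sidesteps the keystore rather than using it):

    # encrypt(1) with a key file instead of a PKCS#11 keystore token
    encrypt -a aes -k /etc/keys/cc.key -i plain.txt -o cipher.bin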
