Monday, November 29, 2021

Recent Questions - Server Fault


What hardware does Windows Server 2019 Hyper-v use?

Posted: 29 Nov 2021 09:46 AM PST

I work in IT at a video game company. The game design team here wants to use software that only has a single physical computer license (ZBrush, Substance, Photoshop). We are not trying to violate any software license terms: the software will be used by multiple users on a single machine, one user at a time.

They asked me if there is a way for all users to use the software while remaining at their own desks, without buying a new laptop to pass around between users. Installing the software on a portable SSD is not an option either. I was thinking I could create a Hyper-V virtual machine for them and install the licensed software on it; whoever needs the software could then log into that machine and use it. But my question is: whose hardware does the Hyper-V machine use, the server's or the computer the client is working on? Our game designers are going to run pretty demanding software, which I believe also requires pen/tablet input to pass through. Do you think Hyper-V can support this, and would it be a good idea? Or what other solution can I suggest without buying additional licenses?

How to implement mTLS between two separate Istio service meshes?

Posted: 29 Nov 2021 09:29 AM PST

I have two separate Istio service meshes. Service A running in Service Mesh 1 needs to call Service B running in Service Mesh 2. I want all calls to happen using mTLS. Can anyone tell me how to implement this?
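A minimal sketch of one common building block for this, assuming mesh 2 exposes Service B through a gateway at b.mesh2.example.com:15443 (a hypothetical host, with the conventional east-west port) and that client certificates are mounted at the paths shown; depending on how the two meshes were installed, a shared root CA or an east-west gateway may also be required:

# Sketch only: a ServiceEntry plus a DestinationRule in mesh 1 for calling
# Service B in mesh 2 over mTLS. Host, port and cert paths are placeholders.
kubectl apply -f - <<'EOF'
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: service-b-mesh2
spec:
  hosts:
  - b.mesh2.example.com        # hypothetical gateway of mesh 2
  ports:
  - number: 15443
    name: tls
    protocol: TLS
  resolution: DNS
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: service-b-mtls
spec:
  host: b.mesh2.example.com
  trafficPolicy:
    tls:
      mode: MUTUAL
      clientCertificate: /etc/certs/cert-chain.pem   # assumed cert locations
      privateKey: /etc/certs/key.pem
      caCertificates: /etc/certs/root-cert.pem
EOF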

Need to mount GBs in my VPS currently excluded from /dev/vda1

Posted: 29 Nov 2021 09:22 AM PST

I'm losing my mind going through documentation and guides trying to understand and solve a problem with my VPS filesystem.

I bought a VPS from EDIS with 100 GB of space and a CentOS 7 installation provisioned by their automated installer.

From a df -h command I can see this structure:

Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        3.9G     0  3.9G   0% /dev
tmpfs           3.9G     0  3.9G   0% /dev/shm
tmpfs           3.9G   65M  3.8G   2% /run
tmpfs           3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/vda1       4.3G  3.2G  949M  78% /
tmpfs           783M     0  783M   0% /run/user/0

So only 4.3 GB are actually usable right now, but the output of the fdisk -l command shows:

Disk /dev/vda: 107.4 GB, 107374182400 bytes, 209715200 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000caf91

   Device Boot      Start         End      Blocks   Id  System
/dev/vda1   *        2048     9422847     4710400   83  Linux
/dev/vda2         9422848    10289151      433152   82  Linux swap / Solaris

There is unpartitioned space on /dev/vda that could be added to the volume mounted on / (/dev/vda1). So what do I need to do?

I think that I should:

  • create a new partition on /dev/vda with fdisk ( for example )
  • make the new partition into a volume in the system as /dev/vda3
  • and then...?

Thanks for any help!
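For the "and then...?" step, a sketch under the assumption that the free space becomes a separate filesystem (since /dev/vda2, the swap partition, sits directly after /dev/vda1, the root partition cannot simply be grown in place); the /srv mount point is only an example, and a backup first is prudent:

fdisk /dev/vda        # create /dev/vda3 in the free space (n, p, 3, defaults, w)
partprobe /dev/vda    # re-read the partition table without rebooting
mkfs.xfs /dev/vda3    # CentOS 7's default filesystem; mkfs.ext4 also works
mkdir -p /srv
mount /dev/vda3 /srv
echo '/dev/vda3 /srv xfs defaults 0 0' >> /etc/fstab   # persist across reboots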

Postfix no longer rejecting emails based on spam block lists

Posted: 29 Nov 2021 09:26 AM PST

My postfix server is configured to reject emails based on a couple of spam block lists administered by spamhaus and spamcop.

After noticing that I've been receiving more spam than normal recently, I've discovered from the logs that the last time an email was rejected based on a positive result from either of these services was a week ago. I've made no changes to my Postfix configuration for some time, so nothing should have changed on the server.

I've run the tests at https://blt.spamhaus.com/ and they all get through, which confirms to me that emails are not being rejected as they should be. Plus, I've checked the block lists for the sending domains of a couple of the spam emails I've received and they are present, so they should have been rejected.

I'm at a bit of a loss on how to troubleshoot this any further. There doesn't seem to be anything in the postfix logs that says "I'm not checking this block list because..." How can I find the root cause of this problem?

My smtp recipient restrictions are as follows:

smtpd_recipient_restrictions =
    permit_mynetworks
    check_sender_access hash:/etc/postfix/sender_access
    reject_unauth_destination
    reject_unauth_pipelining
    reject_invalid_hostname
    reject_non_fqdn_sender
    reject_unknown_sender_domain
    reject_non_fqdn_recipient
    reject_unknown_recipient_domain
    reject_rbl_client bl.spamcop.net
    reject_rbl_client zen.spamhaus.org
    reject_rbl_client dul.dnsbl.sorbs.net
    permit
smtpd_reject_unlisted_sender = yes

Output of postconf -n:

alias_database = hash:/etc/aliases
alias_maps = hash:/etc/aliases
biff = no
command_directory = /usr/sbin
daemon_directory = /usr/lib/postfix/sbin
disable_vrfy_command = yes
home_mailbox = Mail/
mailbox_command = /usr/lib/dovecot/deliver
mailbox_size_limit = 0
message_size_limit = 20480000
mydestination = b3.localdomain, localhost.localdomain, localhost, /etc/postfix/bubbadomains, $myhostname
mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
recipient_delimiter = +
relayhost = smtp.gmail.com
sender_bcc_maps = hash:/etc/postfix/sender_bcc
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous
smtp_tls_session_cache_database = btree:${queue_directory}/smtp_scache
smtp_use_tls = yes
smtpd_banner = $myhostname ESMTP $mail_name (Debian/GNU)
smtpd_discard_ehlo_keywords = silent-discard, dsn
smtpd_helo_required = yes
smtpd_recipient_restrictions = permit_mynetworks check_sender_access hash:/etc/postfix/sender_access reject_unauth_destination reject_unauth_pipelining reject_invalid_hostname reject_non_fqdn_sender reject_unknown_sender_domain reject_non_fqdn_recipient reject_unknown_recipient_domain reject_rbl_client bl.spamcop.net reject_rbl_client zen.spamhaus.org reject_rbl_client dul.dnsbl.sorbs.net permit
smtpd_reject_unlisted_sender = yes
smtpd_relay_restrictions = permit_mynetworks check_sender_access hash:/etc/postfix/sender_access reject_unauth_destination reject_unauth_pipelining reject_invalid_hostname reject_non_fqdn_sender reject_unknown_sender_domain reject_non_fqdn_recipient reject_unknown_recipient_domain reject_rbl_client bl.spamcop.net reject_rbl_client zen.spamhaus.org reject_rbl_client dul.dnsbl.sorbs.net permit
smtpd_tls_cert_file = /etc/letsencrypt/live/mydomain.co.uk/fullchain.pem
smtpd_tls_key_file = /etc/letsencrypt/live/mydomain.co.uk/privkey.pem
smtpd_tls_session_cache_database = btree:${queue_directory}/smtpd_scache
smtpd_use_tls = yes
unknown_local_recipient_reject_code = 550
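One avenue worth checking (an assumption about the cause, not something shown in the configuration above): Spamhaus does not answer DNSBL queries coming from large public resolvers such as 8.8.8.8, so a change of resolver alone can make reject_rbl_client silently stop matching. A quick test from the mail server:

# 127.0.0.2 is the standard DNSBL test entry and should always be listed:
dig +short 2.0.0.127.zen.spamhaus.org
# one or more 127.0.0.x answers are expected; an empty reply suggests the
# resolver in /etc/resolv.conf is a public one that Spamhaus refuses
cat /etc/resolv.conf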

Is rate limiting delay included in nginx $request_time?

Posted: 29 Nov 2021 08:55 AM PST

If requests are delayed by nginx rate limiting (rate limit exceeded, but within burst rate), is this delay included in the nginx $request_time total?

The nginx docs state that $request_time is "time elapsed since the first bytes were read from the client"

If a request is delayed, is that before or after request bytes are read from the client? I assume after, since rate limiting can be based on request headers, etc.

Is there a way to separate total request time and time spent specifically on sending/receiving network communication to/from the client?

Note: I am aware of $upstream_response_time, and am logging that. I specifically am concerned with differentiating specific operations within nginx (caching, rate limiting, etc), and client-side network communications.
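A partial workaround using only stock nginx variables: log $request_time and $upstream_response_time side by side, so the time spent inside nginx (rate-limit delay, queuing, client I/O) appears as the difference between the two. A sketch:

# drop-in config; conf.d files are included in the http{} context by default
cat > /etc/nginx/conf.d/timing.conf <<'EOF'
log_format timing '$remote_addr "$request" rt=$request_time urt=$upstream_response_time';
access_log /var/log/nginx/timing.log timing;
EOF
nginx -t && nginx -s reload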

EC2 instance running Ubuntu as a router to Wireguard network

Posted: 29 Nov 2021 08:20 AM PST

I have one machine in AWS EC2 running Ubuntu 16.04 (B), with WireGuard running as a VPN server for some road-warrior devices (C).

I'll try to sketch it below:

+-----+                              +-----+                            +-----+
|     | ---------------------------> |     | -------------------------> |     |
|  A  | 172.30.0.5/16  172.30.0.6/16 |  B  | 10.70.0.1/24  10.70.0.2/32 |  C  |
|     | ens5                    eth0 |     | wg0                    wg0 |     |
+-----+                              +-----+                            +-----+

I want to route traffic addressed to 10.70.0.0/24 from (A) to (C) via (B).

I tried following config:

On host (A):

ip route add 10.70.0.0/24 via 172.30.0.6  

The EC2 security group allows all traffic to and from 172.16.0.0/12.

On host (B):

sysctl net.ipv4.ip_forward=1
ufw allow from 172.16.0.0/12
ufw route allow out on wg0
iptables -t nat -A POSTROUTING -s 172.16.0.0/12 -o wg0 -j MASQUERADE

The EC2 security group allows all traffic to and from 172.16.0.0/12.


I even tried setting DEFAULT_FORWARD_POLICY="ACCEPT" in /etc/default/ufw.

I'm out of ideas as to what else is missing here; I can't get any packets to pass through. On host (B), iptables doesn't see any packets going through its FORWARD chain:

iptables -nv -L FORWARD
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
    0     0 ufw-before-logging-forward  all  --  *      *       0.0.0.0/0            0.0.0.0/0
    0     0 ufw-before-forward  all  --  *      *       0.0.0.0/0            0.0.0.0/0
    0     0 ufw-after-forward  all  --  *      *       0.0.0.0/0            0.0.0.0/0
    0     0 ufw-after-logging-forward  all  --  *      *       0.0.0.0/0            0.0.0.0/0
    0     0 ufw-reject-forward  all  --  *      *       0.0.0.0/0            0.0.0.0/0
    0     0 ufw-track-forward  all  --  *      *       0.0.0.0/0            0.0.0.0/0
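Two checks that fit this symptom, offered as hypotheses rather than a known fix: confirm the packets from (A) reach (B) at all, and remember that EC2 instances drop traffic not addressed to themselves unless the source/destination check is disabled:

# on host (B): do packets for the WireGuard subnet arrive on eth0 at all?
tcpdump -ni eth0 net 10.70.0.0/24
# if nothing shows up, disable the EC2 source/dest check on instance (B)
# (the instance id below is a placeholder):
aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 --no-source-dest-check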

Unable to finalize Container Registry Transition to Artifact Repository

Posted: 29 Nov 2021 07:53 AM PST

I tried to create a Container Registry and it asked me to upgrade to Artifact Registry. When I tried to transition, the Finalize button did not work; it keeps loading indefinitely. I tried using Safari and Chrome but was unable to make it work.

DNS - delegation - broadcast and network IP adresses

Posted: 29 Nov 2021 08:36 AM PST

Is it a problem if I delegate network and broadcast addresses as part of network delegation in DNS?

I am following instructions for classless delegation of network on Zytrax (3.3 Reverse Map Delegation).

In the example from Zytrax (below), it's mentioned that all addresses except the network and broadcast addresses need to be defined.

; definition of our target 192.168.23.64/27 subnet
; name servers for subnet reverse map
64/27         IN  NS  ns1.example.com.
64/27         IN  NS  ns2.example.com.
; IPs addresses in the subnet - all need to be defined
; except 64 and 95 since they are the subnets
; network and broadcast addresses not hosts/nodes
65            IN  CNAME   65.64/27.23.168.192.IN-ADDR.ARPA. ;qualified
66            IN  CNAME   66.64/27 ;unqualified name
..
..

I understand that I don't have to delegate network and broadcast addresses.

However, if I already have delegations with network and broadcast addresses in my zone files, is it OK to leave them as they are, or should I fix them to avoid problems?
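A quick way to see what those delegations currently return before deciding, using the example subnet from above:

dig +short -x 192.168.23.64   # network address
dig +short -x 192.168.23.95   # broadcast address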

Issue compiling with AWS Codebuild (vue.js project)

Posted: 29 Nov 2021 08:20 AM PST

I'm trying to compile a Vue.js project using AWS CodeBuild, but it fails in the build phase. It gives me this error (running with sudo):

[Container] 2021/11/26 18:06:02 Running command sudo npm install
/codebuild/output/tmp/script.sh: 4: /codebuild/output/tmp/script.sh: sudo: not found

[Container] 2021/11/26 18:06:02 Command did not exit successfully sudo npm install exit status 127
[Container] 2021/11/26 18:06:02 Phase complete: BUILD State: FAILED
[Container] 2021/11/26 18:06:02 Phase context status code: COMMAND_EXECUTION_ERROR Message: Error while executing command: sudo npm install. Reason: exit status 127

I don't know if I have configured the CodeBuild settings incorrectly.

And it gives me this error (without sudo):

[Container] 2021/11/29 15:06:08 Running command npm run build

> company@0.1.0 build /codebuild/output/src868393770/src
> vue-cli-service build

sh: 1: vue-cli-service: Permission denied
npm ERR! code ELIFECYCLE
npm ERR! errno 126
npm ERR! company@0.1.0 build: `vue-cli-service build`
npm ERR! Exit status 126
npm ERR!
npm ERR! Failed at the company@0.1.0 build script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.

npm ERR! A complete log of this run can be found in:
npm ERR!     /root/.npm/_logs/2021-11-29T15_06_11_998Z-debug.log

[Container] 2021/11/29 15:06:12 Command did not exit successfully npm run build exit status 126
[Container] 2021/11/29 15:06:12 Phase complete: BUILD State: FAILED
[Container] 2021/11/29 15:06:12 Phase context status code: COMMAND_EXECUTION_ERROR Message: Error while executing command: npm run build. Reason: exit status 126

Or maybe I'm using the wrong commands to compile it. The buildspec.yml is this:

version: 0.2

phases:
  build:
    commands:
      - echo Build Phase
      - sudo npm install
      - sudo npm run build
  post_build:
    commands:
      - echo PostBuild Phase
      - aws s3 sync ./dist $S3_BUCKET
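For comparison, a sketch of the build commands without sudo (the standard CodeBuild images run the build as root, which is why sudo is absent from them), plus a guard for the exit-status-126 case, which is often node_modules/.bin scripts arriving without their execute bit:

npm ci                          # clean install instead of npm install
chmod -R +x node_modules/.bin   # restore exec bits if the source archive dropped them
npm run build
aws s3 sync ./dist $S3_BUCKET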

Is there any way to limit EXIM email size without WHM access? [closed]

Posted: 29 Nov 2021 08:17 AM PST

I have my client's website files stored on a shared hosting service on a server that I do not own. As such, I have no root access.

The service provider does not want to alter the email limit. To be specific, they don't want to limit it at all. :-/

What I want to do is set an email size limit on my client's account. If they don't have a limit, they will try to send 100GB emails. They're like a vicious dog with no leash.

So I believe that my real question is:

Is there any way to alter the EXIM config without root access?

Please let me know if I have omitted any relevant information. Thanks in advance!

How to send/broadcast an IPv6-MAC mapping cache update request for an IPv6 address

Posted: 29 Nov 2021 06:44 AM PST

We can update IPv4 neighbor caches by using the arping command. I have used arping -A -I <interface_name> -c <count> <IP_address_of_interface> with success.

What is the command to update the mapping of an IPv6 address to a MAC address on routers/gateways/nodes? We have observed that when an IPv6 address is removed from one node, N1 (RHEL 7.9), and assigned to another node, N2 (RHEL 7.9), the MAC address on the router (Extreme Networks VDX 8770) doesn't get updated. It eventually gets updated, but the delay is not consistent; for this duration N2 cannot reach the gateway.

dovecot Error: No relay host configured for submission proxy (submission_relay_host is unset) after upgrade to version >= 2.3.0

Posted: 29 Nov 2021 09:17 AM PST

I find this occurs because of a new feature of Dovecot in versions >= 2.3.0, so all I have to do is add "submission" to the protocols. I see lmtpd.protocol and pop3d.protocol in /usr/share/dovecot/protocols.d, but I don't know if lmtpd.protocol is the right file in which to add "submission" to the protocols. Next, I'm supposed to "configure the relay MTA server". In /etc/postfix/main.cf there is relayhost = [mail.isp.example]:587. What should I put here? The machine I'm configuring is already my mail server itself; what am I supposed to proxy to?
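For what it's worth, a minimal sketch of the two relay settings for the case where the submission proxy should hand mail to the Postfix instance on the same machine (file name and values are assumptions to adapt):

cat >> /etc/dovecot/conf.d/20-submission.conf <<'EOF'
submission_relay_host = 127.0.0.1
submission_relay_port = 25
EOF
doveconf protocols          # confirm "submission" is now listed
systemctl restart dovecot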

Using URL's with special characters in nginx maps

Posted: 29 Nov 2021 09:20 AM PST

When using nginx and maps, it is possible to rewrite multiple URLs with a map file. It becomes problematic when the URL contains special characters. I have been racking my brain trying to get this right, and hope this question/solution might save others from going gray.

Let's set the scenario.

  • A Linux server (Debian/Ubuntu) running standard nginx.
  • DNS pointing to this server that resolves to a server config.
  • A map that contains no duplicate entries, with incoming and outgoing URLs (resolvable).

The map setup would contain the following:

map $host$request_uri $rewrite_uri {
    include /<path to file filename>;
}

The map file itself contains one entry per line, terminated with a semicolon:

example.com/Böhme https://anotherexample.org/SomeWeirdPath/Böhme;  

The server config for this mapping to work:

server {
    listen 443 ssl http2;
    ssl_certificate /<absolute path to crt file>;
    ssl_certificate_key /<absolute path to key file>;
    server_name example.com;
    proxy_set_header X-Forwarded-For $remote_addr;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_dhparam <absolute path to Diffie Hellman key>;
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains";
    server_tokens off;
    if ($rewrite_uri) {
        rewrite ^ $rewrite_uri redirect;
    }
    rewrite ^ <default URL> redirect;
}

I have simplified this server config so we can concentrate on the map settings. The config assumes that the domain is using SSL and the certificate is valid. The if statement will only execute if $host$request_uri is in the list with a $rewrite_uri; otherwise the last rewrite will be executed.

The Question

How do I transform the $request_uri so that nginx understands it correctly? The map file contains the value in UTF-8, but it seems that nginx wants the $request_uri URL-encoded and in hexadecimal.

$request_uri as in the mapfile

example.com/Böhme

$request_uri URLEncoded as per Browser

example.com/B%C3%B6hme

$request_uri as I think nginx wants it

example.com/B\xC3\xB6hme

I can't seem to find a system package that has this feature, but I think I am starting to re-invent the wheel here.

I would need to:

create a function that will URL-decode the list, as per How to decode URL-encoded string in shell?

function urldecode() { local i="${*//+/ }"; echo -e "${i//%/\\x}"; }  

and then use octal dump as per Convert string to hexadecimal on command line, so the map bucket is created in memory with the correct values for the if statement test.

It's starting to feel like rocket science, and I can't believe that nobody else has solved this problem before; I just can't seem to find a solution.
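One detail that may help while debugging, stated as my understanding rather than settled fact: $request_uri keeps the raw, still-percent-encoded URI exactly as the client sent it, so the map key has to match those bytes exactly. Comparing the byte representations makes any mismatch visible:

# bytes of the UTF-8 map entry vs. the percent-encoded form a browser sends
printf '%s' 'example.com/Böhme' | od -An -c       # shows the raw \303 \266 bytes
printf '%s' 'example.com/B%C3%B6hme' | od -An -c  # what arrives in $request_uri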

Docker compose - disable default gateway route

Posted: 29 Nov 2021 07:20 AM PST

Is it possible to prevent docker from defining default route when using docker-compose yaml file?

If my docker-compose.yaml defines a network with IPAM using the default driver and any subnet, it seems that Docker (or Docker Compose) automatically adds a default route to the routing table of the container attached to this network. Is there any way to disable this?
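One possibility worth testing (an assumption, since the exact compose file isn't shown): a network created as internal gets no default route at all, and Compose accepts the same flag via internal: true on the network definition. A quick check with plain Docker:

docker network create --internal nodefault
docker run --rm --network nodefault alpine ip route   # prints only the subnet route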

Permissions of /run/php-fpm/www.sock getting reset to root when php-fpm restarts after fixing AH02454 permission denied error

Posted: 29 Nov 2021 07:03 AM PST

I am migrating to a new server to upgrade my internals, and I have encountered this error when standing up Apache and PHP:

[Fri Apr 09 16:51:26.243820 2021] [proxy:error] [pid 31179:tid 140021109556992] (13)Permission denied: AH02454: FCGI: attempt to connect to Unix domain socket /run/php-fpm/www.sock (*) failed
[Fri Apr 09 16:51:26.243868 2021] [proxy_fcgi:error] [pid 31179:tid 140021109556992] [client 47.213.222.69:56165] AH01079: failed to make connection to backend: httpd-UDS

The /run/php-fpm/www.sock file does exist, but it has root:root permissions. My web server runs under a user that is not the default apache (the user is sites).

After much searching I found the article PHP-FPM - Error 503 - Attempt to connect to Unix domain socket failed and discovered that the /run/php-fpm/www.sock file needs to be chowned to the same user that runs httpd. So I ran chown sites: /run/php-fpm/www.sock and everything started working.

However, if the php-fpm service is restarted, the permissions revert to root:root and PHP pages return 503.

So I checked in /etc/php-fpm.d/www.conf and updated the lines:

user = sites
group = apache
.
.
.
listen.owner = sites
listen.group = apache

I chowned the www.sock file again, but when the php-fpm service is restarted it still reverts the permissions of the www.sock file back to root:root.

So I am stumped, and there seems to be very little information about this error to be found. I know that I can resolve the issue with the chown command, but if my server ever needs to be restarted in the future, I doubt I will remember to do that unless I add an @reboot cron job or something, and I shouldn't have to. I must be missing some configuration somewhere; I just can't find it.

My system information: CentOS 8 Stream, PHP 7.2.24, Apache 2.4.37
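A sketch that may narrow this down: php-fpm can dump the configuration it actually parsed, which shows whether the edited pool file and its listen.owner really take effect across a restart:

php-fpm -tt 2>&1 | grep -Ei 'listen|user|group'   # dump the parsed pool settings
systemctl restart php-fpm
ls -l /run/php-fpm/www.sock                       # owner should now be sites:apache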

Windows Server 2019 Prevents Freshly Compiled DLL to be Saved Within User's Documents

Posted: 29 Nov 2021 07:02 AM PST

An application which reads C# source code and compiles it to a DLL throws an error when trying to save the DLL to disk within the user's Documents folder, e.g. "c:\Users\<user>\Documents\myapplication\some-folder\new.DLL". The exception is caused by Windows Server 2019 claiming that the path does not exist.

Let me assure you, the path does exist:

  • What works sometimes: Add the user to group "Power Users"

  • What works always: Add the user to group "Administrators"

The latter is not (and should not be) an option.

  • Windows Server 2019
  • The application is run by the user that owns the "Documents" folder
  • The application can create, rename, delete, read, write any other files or folders
  • The folder in question is exempted from antivirus, defenders, etc.

My educated guess is that the application's behaviour can be seen as malicious (which it is not! It's a game that allows mission scripting in C# and uses that technique for speed) and that something is trying to protect something else here. But I do not know what, or how to stop it.

arch linux on zfs root cannot configure grub on bios

Posted: 29 Nov 2021 08:04 AM PST

As the title suggests, I cannot get across the finish line installing Arch on ZFS. I get to the point where I try to install GRUB on my /boot after chrooting into /mnt from the live CD. Anyway, here is my command and error:

# nvim /etc/grub.d/40_custom  
set timeout=5
set default=0

menuentry "Arch Linux" {
    search -u UUID
    linux /vmlinuz-linux zfs=rpool/ROOT/default rw
    initrd /initramfs-linux.img
}

Then I try and make my grub via:

# ZPOOL_VDEV_NAME_PATH=1 grub-mkconfig -o /boot/grub/grub.cfg  

And I get this error:

Generating grub configuration file ...
Found linux image: /boot/vmlinuz-linux
Found initrd image: /boot/initramfs-linux.img
/usr/bin/grub-probe: error: unknown filesystem.
Found fallback initrd image(s) in /boot: initramfs-linux-fallback.img
done

As you can see I am getting an unknown filesystem error, however when I run:

# grub-probe /  

I get

zfs  

So I see zfs when I run grub-probe but get unknown filesystem when I run grub-mkconfig.

Not sure what information you need to help me track this down... I've been googling and hacking on this for two days now; I would really appreciate some help on this one.
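Two small probes that might localize the failure (a sketch; grub-mkconfig probes more than the root path, so the error does not necessarily come from /):

ZPOOL_VDEV_NAME_PATH=1 grub-probe /
ZPOOL_VDEV_NAME_PATH=1 grub-probe /boot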

Change permissions for named volumes in Docker

Posted: 29 Nov 2021 09:33 AM PST

I have a Docker container with a named volume, running as a non-root user, started with the following command:

docker run -v backup:/backup someimage  

In the image there's a backup script which tries to save files in the /backup directory, but it fails. The backup volume mounted at /backup belongs to the root user.

How to change permissions for /backup directory?

-----EDIT1:

MCVE below:

Run docker container with Gerrit:

docker run -v backupgerrit:/backup --name gerrit gerritcodereview/gerrit  

Now on other terminal window try to save something in /backup dir:

docker exec gerrit touch /backup/testfile  

You will get:

touch: cannot touch '/backup/testfile': Permission denied  
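A common workaround, shown only as a sketch: chown the named volume from a one-off root container, using whatever UID the image's non-root user actually has (1000 below is an assumption; check with docker exec gerrit id):

docker run --rm -v backupgerrit:/backup alpine chown -R 1000:1000 /backup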

Automatic installation of updates on Windows Server 2019

Posted: 29 Nov 2021 08:13 AM PST

On a freshly-installed, non-domain-joined Windows Server 2019 (with desktop experience) VM, the ability to change Windows Update installation settings seems to have vanished, with the "Some settings are managed by your organization" message:

Windows Update settings showing settings disabled

Viewing the configured update policies shows two set on the device, both with a type of Group Policy:

  • Download the updates automatically and notify when they are ready to be installed
  • Set automatic update options

However, running rsop and gpresult both (as expected) show no group policy objects applied. (It's a standalone system, so no domain policy applies.)

Is this expected?

Amazon also acknowledges this for their 2019 EC2 images, but it seems odd that using gpedit.msc is the only mechanism for enabling automatic update installation.

Single session limit for sftp user

Posted: 29 Nov 2021 08:02 AM PST

I just want to set a limit on SFTP connections: e.g. if I set a session limit of 1, a user can make only one connection with that username. I don't want an IP-based or port-based limit.

I have tried /etc/security/limits.conf with user hard maxlogins 1

It only works if that user is already active via an SSH connection; if the user is not already connected via SSH, they can make multiple connections.
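One thing to verify, as a guess at the cause: maxlogins is enforced by pam_limits, and it only applies to SFTP sessions if sshd's PAM stack actually loads that module:

grep pam_limits /etc/pam.d/sshd   # is the module loaded for ssh/sftp sessions?
# if absent, adding this line is one option (assumes sshd runs with UsePAM yes):
#   session    required     pam_limits.so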

IPtables whitelist dynamic IP by hostname

Posted: 29 Nov 2021 08:45 AM PST

I want to limit access to a server to certain IPs using iptables but:

  • One of the IPs is dynamic, a normal ISP home connection which changes from time to time.
  • A subdomain e.g. dynamic.example.org is automatically updated when the IP changes using a similar service to dyndns.

Is it possible to have IPtables allow access to a port if dynamic.example.org resolves to that IP?

My current idea is to set up a systemd unit that periodically resolves dynamic.example.org and adjusts iptables accordingly. However, this also requires knowing the old IP address (so storing it somewhere) to remove it from the whitelist.

Is there a simpler way to do this already built in to iptables?
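iptables itself resolves a hostname only once, at rule-insertion time, so a periodic refresh is indeed needed. An ipset-based sketch avoids the old-IP bookkeeping, because the whole set can be flushed and repopulated:

# one-time setup: a set plus a rule that references it
ipset create allowed hash:ip 2>/dev/null || true   # idempotent set creation
iptables -C INPUT -p tcp --dport 22 -m set --match-set allowed src -j ACCEPT 2>/dev/null \
  || iptables -A INPUT -p tcp --dport 22 -m set --match-set allowed src -j ACCEPT
# periodic refresh (e.g. from the systemd timer): no old address needed
ipset flush allowed
ipset add allowed "$(dig +short dynamic.example.org | tail -n1)"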

Google Cloud routing with VPCs peered in a partially connected mesh topology

Posted: 29 Nov 2021 07:54 AM PST

We are dividing our Google Cloud infrastructure into multiple projects, each with its own VPC. We have one central VPC, let's call it vpcA, to which we connect from the outside via a Pritunl VPN and a site-to-site tunnel.

We've also connected vpcA to multiple other projects, B with vpcB and C with vpcC, using VPC peering. This works great in that everything can see the contents of vpcA, and vpcA can see the contents of vpcB and vpcC. Everything has unique 10.0.0.0 IPs. Each VPC has its own unique CIDR range (e.g. 10.96.0.0/16 for vpcA, 10.97.0.0/16 for vpcB, etc.). All subnets are located in the same region.

Our problem now is that vpcB cannot see anything in vpcC. The VPC peering only routes between the local VPC networks, not the peered networks of that VPC (e.g. from vpcB to vpcA only the 10.96.0.0/16 range is routed). There seems to be no way to modify this to route all other traffic as well.

While we can directly interconnect vpcB and vpcC using a separate peering, that quickly becomes complex as the number of VPCs increases. Also, and this is really the deal-breaker right now, when we connect our on-premises infrastructure to vpcA using a Google Cloud VPN gateway/tunnel, it also only sees vpcA's content. Creating a direct VPN tunnel to every single one of our VPCs would create a lot of overhead and a lot of additional cost (with 10 VPCs that would be $360/month without any traffic, just to peer).

Now, the question is, are we missing anything? Is there some way to create a partially connected mesh topology with VPCs on Google Cloud?

Thanks, Volker

You were not connected because a duplicate name exists on the network.

Posted: 29 Nov 2021 09:04 AM PST

I am getting this error daily on my web server, which is trying to connect to SQL Server:

"A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: TCP Provider, error: 0 - You were not connected because a duplicate name exists on the network. If joining a domain, go to System in Control Panel to change the computer name and try again. If joining a workgroup, choose another workgro " Both Server are on Windows 2016. SQ I have already 1. checked the domain controller and found no duplicate entries 2. using different aliases for multiple IP on same SQL server 3. Checked all the server in the environmnet for any duplicate name and found nothing.

Can you please help me resolve this?

Self signed certificate is still trusted after revocation

Posted: 29 Nov 2021 07:00 AM PST

I have created a Root CA and a server certificate following Didier Stevens' blog. My browsers still trust the certificate even after revoking the server certificate. I was getting a certificate revoked error message with my old CA and certificate; I followed the same blog for creating the new CA and cert, but it is not working now.

I have hosted my test application in IIS 10.0.10586.0; my client browsers are Chrome 63.0.3239.132 and IE 11.1295.10586.0. I confirmed the CRL file is accessible and certificate revocation checking is turned on in both browsers. But still the CRL verification is not happening.
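It may help to take the browsers out of the picture first; OpenSSL can check the chain against the CRL directly (the PEM file names below are placeholders for the CA certificate, the CRL and the server certificate):

cat rootca.pem rootca.crl > ca_and_crl.pem
openssl verify -crl_check -CAfile ca_and_crl.pem servercert.pem
# expected after revocation: "certificate revoked"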

x11vnc on Ubuntu 16.04 Gnome with systemd

Posted: 29 Nov 2021 09:04 AM PST

I am having trouble starting the x11vnc service on Ubuntu Server 16.04 with GNOME. It used to work just fine under 14.04. I'm not sure if this is related to x11vnc itself or to systemd.

Here is the systemd service file:

[Unit]
Description=Start x11vnc at startup.
After=multi-user.target

[Service]
Type=simple
ExecStart=/usr/bin/x11vnc -auth guess -forever -loop -noxdamage -repeat -rfbauth /etc/x11vnc.pass -rfbport 5900 -shared -o /var/log/x11vnc.log

[Install]
WantedBy=multi-user.target

The /etc/x11vnc.pass is present and has been generated using x11vnc -storepasswd /etc/x11vnc.passwd

After reboot, x11vnc is started, but I have no luck connecting to it with VNC, and x11vnc.log says:

03/05/2017 16:12:19 passing arg to libvncserver: -rfbauth
03/05/2017 16:12:19 passing arg to libvncserver: /etc/x11vnc.pass
03/05/2017 16:12:19 passing arg to libvncserver: -rfbport
03/05/2017 16:12:19 passing arg to libvncserver: 5900
03/05/2017 16:12:19 x11vnc version: 0.9.13 lastmod: 2011-08-10  pid: 30259
xauth:  unable to generate an authority file name
03/05/2017 16:12:19 -auth guess: failed for display='unset'
03/05/2017 16:12:19 -auth guess: since we are root, retrying with FD_XDM=1
03/05/2017 16:12:19 -auth guess: failed for display='unset'

To validate that x11vnc works fine, I simply run it manually on the server:

x11vnc -rfbauth /etc/x11vnc.passwd  

and with that I can successfully connect with VNC. But how can I start it automatically?
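Given the -auth guess: failed for display='unset' lines in the log above, one sketch worth trying in the unit's ExecStart is naming the display and the display manager's cookie explicitly instead of guessing (the auth path below is an assumption that depends on the display manager; ps aux | grep X shows the real -auth argument):

/usr/bin/x11vnc -display :0 -auth /var/run/lightdm/root/:0 \
  -forever -loop -noxdamage -repeat \
  -rfbauth /etc/x11vnc.pass -rfbport 5900 -shared -o /var/log/x11vnc.log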

Any reason (not) to delete expired SSL certificates on IIS?

Posted: 29 Nov 2021 06:41 AM PST

I'm getting ready to roll over some certificates on IIS 8 / Win Server 2012. I found a bunch of old expired certificates, not bound to any sites anymore. Is there any reason I should not remove these certs?

OpenStack error when converting a glance image to a cinder volume

Posted: 29 Nov 2021 08:02 AM PST

/var/log/cinder/volume.log shows the following error when converting a glance-registered QCOW2 image to a cinder volume.

local variable 'tmp' referenced before assignment
failed to copy image to volume

SQL Server 2008 R2 Temporary Login Issue

Posted: 29 Nov 2021 07:00 AM PST

We have a mature SQL Server 2008 R2 server, being used from many C# web applications, each with connection pooling.

Last night, all web applications lost the ability to login to the database for 6 minutes, before the issue resolved itself. This was for a variety of logins.

I've had a look at the event log on the server, and found a lot of messages like:

The client was unable to reuse a session with SPID [Various], which had been reset for connection pooling. The failure ID is 29. This error may have been caused by an earlier operation failing. Check the error logs for failed operations immediately before this error message.  

I could not find a failed operation immediately before the error message. The failure ID of 29 apparently refers to RedoLoginException.

There were also plenty of these in the event log:

Login failed for user '[Various]'. Reason: Failed to open the database configured in the login object while revalidating the login on the connection.  

Also some time-outs:

A timeout (30000 milliseconds) was reached while waiting for a transaction response from the MSSQLSERVER service.
Timeout occurred while waiting for latch: class 'DBCC_MULTIOBJECT_SCANNER' id ..., type 4, Task ...: 44, waittime 300, flags 0x1a, owning task .... Continuing to wait.
Timeout occurred while waiting for latch: class 'ACCESS_METHODS_DATASET_PARENT', ...

and:

IO Completion Listener (0x900) Worker ... appears to be non-yielding on Node 1. Approx CPU Used: kernel 0ms, user 0ms, Interval; 15334  

From the point of view of the client web servers, they received a number of login errors:

Logon failure: the user has not been granted the requested logon type at this computer
Logon Failure: The target account name is incorrect
Logon failure: unknown user name or bad password

I wondered about thread pooling, and found that max worker threads is set to 0.

Any ideas?

UPDATE: This has now happened on three occasions.

open_basedir vs sessions

Posted: 29 Nov 2021 08:27 AM PST

On a virtual hosting server I have the open_basedir set to .:/path/to/vhost/web:/tmp:/usr/share/pear for each virtual host. I have a client who's running WordPress and he's complaining about open_basedir errors thus:

PHP WARNING: file_exists() [function.file-exists]: open_basedir restriction in effect. File(/var/lib/php/session/sess_42k7jn3vjenj43g3njorrnrmf2) is not within the allowed path(s): (.:/path/to/vhost/web:/tmp:/usr/share/pear)

So the PHP session save_path isn't included in open_basedir, but sessions across all sites on the server seem to be working fine apart from this intermittent instance. I thought that perhaps the default session handler ignored open_basedir and this warning was caused by WP accessing the session file directly.

However, from what I can see, PHP 5.2.4 introduced open_basedir checking for the session.save_path setting: http://www.php.net/ChangeLog-5.php#5.2.4 (I am on PHP 5.2.13).

Any ideas?
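If the goal is simply to stop the warning, one sketch is to widen that vhost's open_basedir to include the session directory, or to give the vhost its own session.save_path inside its web tree; the directives below assume mod_php-style per-vhost configuration:

# inside the vhost config (values mirror the paths from the warning above):
#   php_admin_value open_basedir ".:/path/to/vhost/web:/tmp:/usr/share/pear:/var/lib/php/session"
# or, alternatively, keep sessions per-vhost:
#   php_admin_value session.save_path "/path/to/vhost/tmp"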

Can I tail the log on a Cisco Router?

Posted: 29 Nov 2021 09:17 AM PST

Can I tail the log on a Cisco Router? I have 'logging buffered 51200' and a debug running. I can see the packets with 'show log'. Can I tail this?
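As far as I know there is no direct tail -f equivalent for the buffer itself, but IOS can stream log and debug output to the current terminal session, which behaves much like tailing:

terminal monitor      # on a vty (telnet/ssh) session: stream syslog/debug output live
terminal no monitor   # stop the stream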
