Tuesday, February 22, 2022

Recent Questions - Server Fault


Service account to login to GCP and deploy Compute Image to another GCP tenant

Posted: 22 Feb 2022 03:38 AM PST

We are trying to deploy a Compute image provided by a vendor in their GCP tenant, using a service account that lives in another GCP tenant. The Compute image sits inside a project in the vendor's tenant, and we are trying to access it with a service account from our tenant. The vendor has granted the required permissions to the service account. Is there a way we can achieve this? I searched for relevant articles and found the two linked below, but the SSO they describe is within a single tenant.

Is there a way to configure OAuth 2.0 against the vendor's tenant so that the service account in our tenant can access the Compute image and deploy it? I am new to GCP, so please excuse me if the details don't make sense. Please advise.

https://stackoverflow.com/questions/56008250/use-service-account-to-login-to-cloud-console-gui https://support.google.com/cloud/answer/6158849?hl=en#zippy=%2Cservice-accounts
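If it helps, here is a minimal sketch of how cross-project image use normally works, assuming the vendor granted your service account roles/compute.imageUser on their image or project (all project, image and key-file names below are placeholders): authenticate as your service account and reference the vendor's project with --image-project.

# Authenticate with the service account from your own tenant
gcloud auth activate-service-account --key-file=sa-key.json

# Create the instance in your project, pulling the image from the vendor's project
gcloud compute instances create my-instance \
    --project=my-project \
    --zone=us-central1-a \
    --image=vendor-image \
    --image-project=vendor-project

As far as I can tell, no OAuth configuration beyond the IAM grant should be needed for this path.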

Regards, SJ

Routing all SSH traffic through a proxy

Posted: 22 Feb 2022 03:16 AM PST

Currently we can route SSH traffic to a specific server through a proxy using ~/.ssh/config

Host 1.2.3.4
    HostName 1.2.3.4
    ProxyCommand nc -X 5 -x proxy:12345 %h %p

But is there a way to route all IPs through a proxy?
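A sketch of what a catch-all block might look like, reusing the same proxy as above (the wildcard applies to every host not matched by an earlier, more specific Host block):

# ~/.ssh/config
Host *
    ProxyCommand nc -X 5 -x proxy:12345 %h %p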

Logs hidden by a mounted disk in the same directory

Posted: 22 Feb 2022 03:58 AM PST

We have more than one Linux server on AWS that uses a network file system, in our case AWS EFS, mounted on the /logs directory.

It happens sometimes that:

  • the machine is rebooted
  • for whatever reason the network file system is not mounted, so all processes start logging to /logs, but on the main (system) partition
  • disk space quickly runs low and, in the worst case, the disk becomes completely full.

While debugging the issue a reboot happened and the network file system came back to normal, so the server could mount the disk and start logging to the correct partition.

BUT the logs generated while the network file system was down are still on the main partition. They are NOT accessible, since a disk is now mounted on the same /logs directory, and they use disk space that cannot be reclaimed.

Is there any way, apart from unmounting the network file system, to access those logs so that I can delete them or move them to the correct location and stop them using precious space on the system partition?
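One approach that usually works for files shadowed by a mount point, sketched here with an assumed /mnt/rootfs path: bind-mount the root filesystem somewhere else, which exposes the underlying /logs directory without touching the EFS mount.

mkdir -p /mnt/rootfs
mount --bind / /mnt/rootfs

# the shadowed files are now visible here
du -sh /mnt/rootfs/logs
mv /mnt/rootfs/logs/* /logs/   # or simply delete them

umount /mnt/rootfs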

How can I run two docker containers in the same network namespace?

Posted: 22 Feb 2022 03:00 AM PST

I want to run two docker containers in the same Linux network namespace.
My goal is to route all my torrent traffic through OpenVPN.
This script successfully creates an OpenVPN client container.
I can successfully enter this namespace and verify my IP address is indeed the OpenVPN IP address.

My issue is: how do I run the qBittorrent docker container inside the OpenVPN container's network namespace?

Is there some sort of flag when starting a docker container to specify the network namespace to run in?
Any other possible solutions?
It is my understanding that I cannot change the network namespace of an already running process.
Thanks

UPDATE: SOLUTION

Add this flag to the torrent container:

--net=container:$openvpn_client

openvpn_client="openvpn-client"
torrent_client="torrent_client"
dewinettorrent_ns="dewinettorrent_ns"

function getpid {
    pid="$(docker inspect -f '{{.State.Pid}}' "$1")"
    echo $pid
}

docker rm -f $openvpn_client
docker rm -f $torrent_client
ip netns delete $dewinettorrent

ip netns pids $dewinettorrent_ns | xargs -t kill -9

docker run -d \
  --privileged \
  --name=$openvpn_client \
  --volume /home/dewi/code/dot-files/vpn/:/data/vpn \
  --volume /home/dewi/code/dewi_projects/ivacy_vpn_auth:/data/vpn/auth-user-pass \
  docker-openvpn-client-dewi

docker run -d \
  --name=$torrent_client \
  -e PUID=1000 \
  -e PGID=1000 \
  -e TZ=Europe/London \
  -e WEBUI_PORT=8080 \
  -p 9080:8080 \
  -v /path/to/appdata/config:/config \
  -v /path/to/downloads:/downloads \
  lscr.io/linuxserver/qbittorrent

mkdir -p /var/run/netns
ln -fs "/proc/$(getpid $openvpn_client)/ns/net" /var/run/netns/$dewinettorrent_ns

mkdir -p /etc/netns/$dewinettorrent_ns/
echo 'nameserver 8.8.8.8' > /etc/netns/$dewinettorrent_ns/resolv.conf

docker exec -i $openvpn_client bash /data/scripts/entry.sh &

ip netns exec $dewinettorrent_ns curl icanhazip.com # successfully returns my VPN IP address
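For completeness, a sketch of how that flag would be applied to the qBittorrent container from the script above; note that with --net=container the published ports (-p) have to move to the container that owns the namespace, i.e. the OpenVPN client, so they are dropped here.

docker run -d \
  --name=$torrent_client \
  --net=container:$openvpn_client \
  -e PUID=1000 -e PGID=1000 -e TZ=Europe/London -e WEBUI_PORT=8080 \
  -v /path/to/appdata/config:/config \
  -v /path/to/downloads:/downloads \
  lscr.io/linuxserver/qbittorrent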

Amazon Elastic File System costs

Posted: 22 Feb 2022 02:58 AM PST

Amazon's pricing (estimation) calculator for EFS (Elastic File System) asks to enter the average GB (or TB) of storage used per month.

This is directly from their page:

Enter the amount of EFS storage capacity you expect to use. EFS is an elastic file system that grows and shrinks based on actual usage and you pay only for what you use. We recommend you input your average usage for the month.  

I interpret this as the amount of data present in each month, averaged over a period of time, so I would like confirmation of my estimate.

Let's say I store pictures: every month I produce 1 GB of pictures which I store in EFS. So in the first month I have 1 GB, the next month I have 2 GB in storage, and so forth.

The monthly average over a period of 12 months:

[1GB x (1+2+3+...+12)]/12 = 6.5GB/month  

Is this the "monthly average" that I have to enter in the price calculator? I do not see any other answer which will not hurt Bezos :)

Edit:

I should add that I could not find a price estimation example for ever-growing storage use (or storage that grows for only some time). That is why I am asking whether my reasoning is sound in this case.

Installing O365 via powershell inside an ISO

Posted: 22 Feb 2022 02:38 AM PST

I created an Office 365 installer that performs a local installation and dynamically changes the SourcePath, and I need to run it from an ISO file (I normally use USB sticks, but in VMs I use an ISO).

Running locally from any directory or from USB it works perfectly, but from an ISO it doesn't; this error appears:

Set-Content : Access to path 'C:\Users\Administrator\AppData\Local\Temp\tmpoffice\configuration.xml' was denied.
At E:\SMS\PKG\CM10017B\InstallOffice_OfflineMode.ps1:24 character:164
+ ... fficeMgmtCOM="TRUE" SourcePath="'+$PS1dirEOL) | Set-Content $tempconf
+                                                     ~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (:) [Set-Content], UnauthorizedAccessException
    + FullyQualifiedErrorId : System.UnauthorizedAccessException,Microsoft.PowerShell.Commands.SetContentCommand

How do I get this to work from an ISO as well? I know an ISO is read-only, but it seems strange that it is trying to modify something that is not on the ISO but in a temporary directory, and it still can't.

$PS1dir = Get-Location

#Paths of the configuration
$tempdir = "$env:TEMP\tmpoffice"
$conf = "$($PS1dir)\configuration.xml"
$tempconf = "$env:TEMP\tmpoffice\configuration.xml"

#Current path with reformated end of XML line
$PS1dirEOL = "$($PS1dir)`" `AllowCdnFallback=`"TRUE`">"

#Copy configuration file for temp folder and set variable for same
Copy-Item $conf -Destination (New-Item -Path $tempdir -Type Directory -Force) -Recurse -Force

#Replace old line with the current folder
(Get-Content $tempconf) -replace '<Add OfficeClientEdition=.*', ('<Add OfficeClientEdition="64" Channel="Current" OfficeMgmtCOM="TRUE" SourcePath="'+$PS1dirEOL) | Set-Content $tempconf

#Running O365 installation from new configuration file
Start-Process cmd.exe -ArgumentList "/c start /MIN $($PS1dir)\setup.exe /configure $tempconf" -Wait

Remove-Item -Path $tempdir -Force -Recurse
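A hunch rather than a confirmed fix: files copied from read-only media (ISO/DVD) keep the ReadOnly attribute, and Set-Content on a read-only file throws exactly this UnauthorizedAccessException. Clearing the attribute right after the copy might be enough:

#Copy configuration file to temp folder, then clear the ReadOnly flag inherited from the ISO
Copy-Item $conf -Destination (New-Item -Path $tempdir -Type Directory -Force) -Recurse -Force
Set-ItemProperty -Path $tempconf -Name IsReadOnly -Value $false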

High network egress from AMERICAS to EMEA on GCP compute and AWS EC2

Posted: 22 Feb 2022 01:58 AM PST

I set up a 4-node Hadoop cluster (1 master, 3 workers) on both AWS and GCP, but I am experiencing quite high network egress on both platforms. AWS cluster apps: Hadoop, YARN. GCP cluster apps: Hadoop, YARN, Hive.

AWS came to 244.027 GB ($21.96). This was 'pardoned' after an explanation to AWS support, but no information about the traffic that would help prevent a future occurrence was provided. Since there are no credits on AWS, I had to take the cluster down.

GCP: same issue, but at least with credit limits.

Probably related: I have received 'potential violation of service' notices due to DDoS attacks from both AWS and GCP. Most recently I received one from GCP while setting up Kerberos on the cluster.

So far:

  1. Configured nodes to talk to each other using internal IPs (previously external IPs).
  2. Firewall rules allow only the relevant ports.
  3. Closed all browser tabs to app UIs (Hive, HDFS, YARN) when not in use.
  4. Asked AWS support for assistance with best practices and information on the traffic; received a lot of links to AWS material, mostly about setting up billing alerts (not configuration or troubleshooting).
  5. GCP support has been very helpful and GCP billing is straightforward. Requested tech support via chat; still pending.

Any help on how to track where the traffic is coming from would be appreciated.
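One way to see exactly where the egress goes is flow logs; a sketch, with placeholder VPC ID, subnet and bucket names:

# AWS: publish VPC flow logs to S3, then inspect destination IPs and byte counts
aws ec2 create-flow-logs \
    --resource-type VPC \
    --resource-ids vpc-0123456789abcdef0 \
    --traffic-type ALL \
    --log-destination-type s3 \
    --log-destination arn:aws:s3:::my-flowlog-bucket

# GCP equivalent: enable flow logs on the cluster's subnet
gcloud compute networks subnets update my-subnet --region=us-central1 --enable-flow-logs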

How can I configure port forwarding in an HG8546M router from Huawei

Posted: 22 Feb 2022 01:54 AM PST

Has anyone done this successfully? I've read all the documents but keep getting an error that the external source end port is not valid. I don't know what the setting should be.

I want to SSH to a home server on port 2222.

Unable to access kubernetes dashboard: "error trying to reach service: dial tcp 10.20.20.184:8443: connect: connection timed out"

Posted: 22 Feb 2022 01:45 AM PST

I have created an AWS EKS cluster.

I am able to get services list:

kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   172.20.0.1   <none>        443/TCP   24h

I see the kubernetes-dashboard pod:

kubectl get pods -n kubernetes-dashboard
NAME                                         READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-778b77d469-wvngl   1/1     Running   0          20h
kubernetes-dashboard-5cd89984f5-ljw56        1/1     Running   0          20h

I start the kube proxy using: kubectl proxy

When I visit the dashboard URL http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#!/login I get the response:

kind        "Status"
apiVersion  "v1"
metadata    {}
status      "Failure"
message     "error trying to reach service: dial tcp 10.20.20.184:8443: connect: connection timed out"
reason      "ServiceUnavailable"
code        503

I have seen many similar posts, but they seem to have different contexts. I suspect there is some error in the cluster configuration, but I am not sure.

What could be wrong with my cluster configuration? How do I go about fixing it?
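Not a fix for the cluster configuration itself, but two checks that may narrow it down (resource names assume the standard dashboard manifests). On EKS, a timeout from the API server proxy to a pod on 8443 often points at security groups not allowing control-plane-to-node traffic on that port; port-forwarding bypasses the proxy entirely and at least gives you access in the meantime.

# does the service have endpoints, and can you reach it without the apiserver proxy?
kubectl -n kubernetes-dashboard get endpoints kubernetes-dashboard
kubectl -n kubernetes-dashboard port-forward svc/kubernetes-dashboard 8443:443
# then browse to https://localhost:8443/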

wordpress nginx in docker lost css styles, js because embed file hostname in html not updated

Posted: 22 Feb 2022 02:13 AM PST

We are trying to install WordPress behind nginx on Docker. The domain pointing to it is SSL-enabled, and when accessing the website the HTML loads fine but the CSS, JS and images are all missing.

The reason is that the HTML is still using the WordPress container hostname (which I think only resolves locally between Docker containers) to embed the CSS, JS and image files from the container running the WordPress image.

Here is what I see when I inspect: https://i.stack.imgur.com/N5YO6.png

my nginx config:

server {
    listen 80;
    server_name my_domain.com www.my_domain.com;

    # Redirect http to https
    location / {
        return 301 https://my_domain.com$request_uri;
    }
}

server {
    listen 443 ssl http2;
    ...
    location / {
        proxy_pass http://wordpress_host:80;
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        proxy_pass http://wordpress_host:80;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }

    location ~ /\.ht {
        deny all;
    }

    location = /favicon.ico {
        log_not_found off; access_log off;
    }
    location = /robots.txt {
        log_not_found off; access_log off; allow all;
    }
    location ~* \.(css|gif|ico|jpeg|jpg|js|png)$ {
        expires max;
        log_not_found off;
    }
}

How can I configure nginx and WordPress to resolve this?
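A sketch of the usual two-part fix, assuming the upstream name wordpress_host from the config above: forward the public hostname and scheme to WordPress, and make sure the WordPress siteurl/home options (or the WP_HOME / WP_SITEURL constants in wp-config.php) use https://my_domain.com rather than the container name.

    location / {
        proxy_pass http://wordpress_host:80;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }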

how to add new subnet mask for a private ip configuration in fortigate firewall

Posted: 22 Feb 2022 01:27 AM PST

I have been using a FortiGate firewall, and the subnet mask is 255.255.224.0.

Now our cloud provider is assigning a new subnet mask for the servers: 255.255.192.0.

How can I add this without disturbing the existing environment?
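For reference, a rough sketch of what changing an interface mask looks like from the FortiOS CLI (interface name and address are placeholders); before applying it on a live firewall you would want to confirm the wider mask still matches the gateway, DHCP scopes and existing policies.

config system interface
    edit "port1"
        set ip 10.10.0.1 255.255.192.0
    next
end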

Decision making about cloud usage - History of IT cloud disasters needed

Posted: 22 Feb 2022 04:00 AM PST

In my company, the discussion about switching to the Microsoft cloud (Office 365) is intensifying.

I need a list / statistics of cloud outages / downtime / disasters that interrupted the business of their users.

Does anybody know of a link to comprehensive statistics about that?

Thanks a lot.

Get the potential chauffeur car service New York Now

Posted: 22 Feb 2022 01:01 AM PST

Hire the best and potential car service new york options from Northwest limousine service and enjoy the ride of LAX or JFK car service with it!

Apache: I cannot set full cache header for text/html using htaccess

Posted: 22 Feb 2022 12:58 AM PST

I had a similar problem where I couldn't set the full cache header for JS and CSS files in my .htaccess file; it turned out the cache expiry was being set on the server, and I had to add AllowOverride All in the vhost container to get it working. However, I still cannot set the full cache header for text/html on a page. I can set the max-age using mod_expires in my .htaccess file, but if I try to set a cache header with this:

<FilesMatch "\.(html|htm|rtf|rtx|txt|xsd|xsl|xml|HTML|HTM|RTF|RTX|TXT|XSD|XSL|XML)$">
    FileETag MTime Size
    <IfModule mod_headers.c>
        Header set Pragma "public"
        Header set Cache-Control "no-cache, must-revalidate, public"
    </IfModule>
</FilesMatch>

The no-cache, must-revalidate, public values don't show up; the only thing that appears in Cache-Control is the max-age. Does anyone know how to fix this so I can set the entire cache header in .htaccess for text/html?

Here is what I have after Bob's suggestion and it still doesn't work:

<FilesMatch ".+\.(html|htm|rtf|rtx|txt|xsd|xsl|xml|HTML|HTM|RTF|RTX|TXT|XSD|XSL|XML)$">
    FileETag MTime Size
    <IfModule mod_headers.c>
        Header set Pragma "public"
        Header set Cache-Control "no-cache, must-revalidate, public"
    </IfModule>
</FilesMatch>

I'm trying to set the cache header for the file that is called "/" in the DevTools > Network with initiator "document" and type "html".
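One hedged workaround, assuming mod_expires is what keeps re-adding the bare max-age: switch expiry off for these types and emit the complete header from mod_headers instead, including whatever max-age you want.

<FilesMatch ".+\.(html|htm)$">
    <IfModule mod_expires.c>
        ExpiresActive Off
    </IfModule>
    <IfModule mod_headers.c>
        Header set Pragma "public"
        Header always set Cache-Control "max-age=0, no-cache, must-revalidate, public"
    </IfModule>
</FilesMatch>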

apache cannot resolve hostname?

Posted: 22 Feb 2022 02:13 AM PST

I'm running a server with CentOS 8 and Apache 2.4.37 for hosting a WordPress site. That website should replace an old one with the same domain name. In /etc/hosts I put the hostname I want, in the form:

172.16.1.202 somesite.com

and I have changed only a few things in configuration files: /etc/httpd/conf/httpd.conf

#
ServerName somesite.com
#
...
#
DocumentRoot "/var/www/html"
#
# Further relax access to the default document root:
<Directory "/var/www/html">
    #
    #
    Options Indexes FollowSymLinks

    #
    # AllowOverride controls what directives may be placed in .htaccess files.
    # It can be "All", "None", or any combination of the keywords:
    #   Options FileInfo AuthConfig Limit
    #
    AllowOverride All

    #
    # Controls who can get stuff from this server.
    #
    Require all granted
</Directory>

and added digital certificates: /etc/httpd/conf.d/ssl.conf

<VirtualHost _default_:443>

# General setup for the virtual host, inherited from global configuration
#DocumentRoot "/var/www/html"
#ServerName www.example.com:443

# Use separate log files for the SSL virtual host; note that LogLevel
# is not inherited from httpd.conf.
ErrorLog logs/ssl_error_log
TransferLog logs/ssl_access_log
LogLevel warn

#   SSL Engine Switch:
#   Enable/Disable SSL for this virtual host.
SSLEngine on

#   List the protocol versions which clients are allowed to connect with.
#   The OpenSSL system profile is used by default.  See
#   update-crypto-policies(8) for more details.
#SSLProtocol all -SSLv3
#SSLProxyProtocol all -SSLv3

#   parallel.
#SSLCertificateFile /etc/pki/tls/certs/localhost.crt
SSLCertificateFile /etc/pki/tls/certs/somesite.com.crt

#   Server Private Key:
#   If the key is not combined with the certificate, use this
#   directive to point at the key file.  Keep in mind that if
#   you've both a RSA and a DSA private key you can configure
#   both in parallel (to also allow the use of DSA ciphers, etc.)
#   ECC keys, when in use, can also be configured in parallel
#SSLCertificateKeyFile /etc/pki/tls/private/localhost.key
SSLCertificateKeyFile /etc/pki/tls/private/somesite.com.key

#   Server Certificate Chain:
#   Point SSLCertificateChainFile at a file containing the
#   concatenation of PEM encoded CA certificates which form the
#   certificate chain for the server certificate. Alternatively
#   the referenced file can be the same as SSLCertificateFile
#   when the CA certificates are directly appended to the server
#   certificate for convenience.
#SSLCertificateChainFile /etc/pki/tls/certs/server-chain.crt
SSLCertificateChainFile /etc/pki/tls/certs/DigiCertCA.crt
...

I don't have virtual hosts, so I don't have /etc/httpd/conf.d/somesite.conf as I normally do. I put the addresses and names into the DNS servers (internal and external) that we also host, and when I type somesite.com I get

172.16.1.202 (and it shows no certificate, probably because it's the IP address instead of somesite.com)

Apache configtest is ok and dns records are all ok.

What could be the issue? I have no idea what to try... Can somebody help me?

Thanks! Kind regards.

Edit: I tried adding a somesite.com.conf file with this configuration:

<VirtualHost *:443>
        SSLEngine on
        SSLCertificateFile /etc/pki/tls/certs/somesite_com.crt
        SSLCertificateKeyFile /etc/pki/tls/private/somesite_com.key
        SSLCertificateChainFile /etc/pki/tls/certs/DigiCertCA.crt
        DocumentRoot /var/www/html
        ServerName somesite.com
        ServerAlias www.somesite.com
</VirtualHost>
<VirtualHost *:80>
        ServerAdmin admin@somesite.com
        ServerName somesite.com
        ServerAlias www.somesite.com
        DocumentRoot /var/www/html
        ErrorLog /etc/httpd/logs/error_log
        CustomLog /etc/httpd/logs/access_log combined
</VirtualHost>

But it didn't change anything. :(
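Not a fix, but two checks that usually narrow this kind of problem down (hostname and IP taken from the question): see which vhost httpd actually selected, and request the site by name so SNI is sent even while DNS still points at the old server.

apachectl -S
curl -vk https://somesite.com/ --resolve somesite.com:443:172.16.1.202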

Create vdisk in PERC H710 Mini

Posted: 22 Feb 2022 02:39 AM PST

I have a refurbished Dell R720 server with the PERC H710 Mini RAID controller. I can assemble vdisks from physical disks in the BIOS. However, I'd like to create a vdisk without rebooting. I've installed Debian 11 and added the srvadmin tools using the following instructions.

First, I've tried to manage the disks using the idracadm7 command. Some resources indicate that there should be a storage subcommand, but not according to idracadm7 help.

I've found the idracadm7 raid get xxx command with which I can query information about virtual or physical disks or the controller. But this command seems to be read-only. The same is true when I connect remotely with idracadm7 -r hostname -u user -p password and a custom openssl config to allow TLSv1.0 and TLSv1.1. Here I'm a bit confused because the built-in help refers to the tool as racadm instead of idracadm7. I'm using RACADM version 8.4.0.

Finally, I discovered omconfig, for which I had to set LD_LIBRARY_PATH to the OpenSSL 1.0.0 libraries. However, here the built-in help doesn't show any subcommands, even though I'm logged in as root and the dataeng service is running.

root@r720# omconfig -?
omconfig
Configures component properties.

The available command(s) are:

Command          Description

Error! User has insufficient privileges to run command.

For me it's difficult to tell,

  • which approach should work,
  • which approach fails due to missing drivers,
  • which approach fails because it's a paid-subscription feature,
  • which approach fails because the software is outdated (and libraries are incompatible), and
  • which approach fails because the commands (like storage) describe a different version.
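For what it's worth, if the OMSA install were healthy, creating a vdisk without a reboot would look roughly like this (controller and pdisk IDs are placeholders; list them first with omreport):

omreport storage controller
omreport storage pdisk controller=0

omconfig storage controller action=createvdisk controller=0 \
    raid=r1 size=max pdisk=0:1:0,0:1:1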

root login or sudo user for server administration?

Posted: 22 Feb 2022 02:18 AM PST

I'm trying to understand the technical arguments/security implications between ssh'ing with root directly, or making an auxiliary sudo user in the context of maintaining a server. To clarify, we're talking about servers owned by a single admin. For multiple people working on the machine, it's obvious that there is the audit trail benefit of having unique users for each actual person and fine-grained permissions.

My thought is: on a desktop workstation it makes sense, and is recommended, to use a non-root user for daily work, but on a server you usually log in to maintain it, and 99% of the time all your activities require root permissions.

So is there any security benefit in creating a "proxy" user that you're going to sudo to root from anyway, instead of directly providing SSH access to root?

The only benefit I can think of is security through obscurity, i.e. bots would normally probe for the "root" user. But as I see it, if a sudoer's account gets compromised, it's the same as compromising the root user, so game over.

In addition, most remote administration frameworks, NAS systems and hypervisors encourage the use of a root user for web login.

How to set VMWare VM screen resolution on Windows using Ansible

Posted: 22 Feb 2022 03:52 AM PST

I'm trying to deploy a vSphere Windows VM via Ansible and need to set a specific screen resolution (1024x768). Running VMWareResolutionSet.exe works locally in PowerShell with the following command (the , needs to be escaped with a ` in Powershell to avoid making the arguments a list, and the & is needed to run commands with spaces in their paths):

& "C:\Program Files\VMWare\VMware Tools\VMwareResolutionSet.exe" 0 1 `, 0 0 1024 768  

However, running this command remotely with Ansible's win_command only yields a return code of 1 with no further error message. The same behavior occurs when running the command directly with pywinrm or when invoking PowerShell as a subshell. As far as I can tell, the problem lies with this not being an interactive PowerShell instance. However, setting become: true and become_method: runas did not work.

How can I set the VM screen resolution via Ansible?
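One workaround often suggested for session-bound tools like this is to run the executable via a scheduled task instead of directly over WinRM; a rough sketch (module names from the community.windows / ansible.windows collections, path as in the question). Whether the task needs to run as SYSTEM or as the logged-on console user is something to experiment with.

- name: Register a task that runs VMwareResolutionSet outside the WinRM session
  community.windows.win_scheduled_task:
    name: SetResolution
    actions:
      - path: C:\Program Files\VMWare\VMware Tools\VMwareResolutionSet.exe
        arguments: 0 1 , 0 0 1024 768
    username: SYSTEM
    run_level: highest
    state: present

- name: Trigger the task once
  ansible.windows.win_command: schtasks.exe /Run /TN SetResolution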

How to mirror SQL Server databases into Salesforce automatically?

Posted: 22 Feb 2022 01:04 AM PST

I'd like to mirror some SQL Server databases in Salesforce. (Mirror = "keep the data in Salesforce in sync with what's in SQL Server without me having to do anything much".)

Does anyone know of a way to do this? I've seen some products out there that come close, but no cigar.

IIS 8.5 Error while performing operation on web.config

Posted: 22 Feb 2022 03:05 AM PST

I have a test server and a prod server that host a .NET Core 2.1 website. On the test server, I can access the website and publish with MSDeploy without any problem.

On the prod server I can publish correctly from Visual Studio, and everything is set up like on the test server, but I get this when I try to open the website:

HTTP Error 500.19
Error Code    0x8007000d
Config File   \\?\C:\MySite\web.config

I have the same configuration on both servers, the same program versions, and Web Deploy 3.6 and URL Rewrite installed.

The only visible difference is in Services, where the Web Deployment Agent Service is not listed on the prod server, even though I selected it during the Web Deploy install.

Interesting thing: when I publish with the "Self-contained" option, the website does display, so I suspect something is missing but I can't find what.

I have tried a lot of things from forums and the Microsoft documentation, but nothing solves this problem.
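Error 0x8007000d on a .NET Core site that only works when published self-contained often points at the ASP.NET Core Module / Hosting Bundle missing on that machine; a hedged way to check and remedy, run on the prod server:

%windir%\system32\inetsrv\appcmd.exe list modules | findstr /i AspNetCore

rem if nothing is listed, install the .NET Core 2.1 Hosting Bundle, then restart IIS
net stop was /y
net start w3svc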

Windows 10 app provisioned to non-existent user

Posted: 22 Feb 2022 03:44 AM PST

I created an image of Windows 10 1709 and now I want to run sysprep so I can upload to WDS. Sysprep fails:

Package Microsoft.BingNews_4.21.2212.0_x64__8wekyb3d8bbwe was installed for a user, but not provisioned for all users.

It shows it exists, but the user it references does not exist, thus I have no way to remove it from that user:

PackageFullName        : Microsoft.BingNews_4.21.2212.0_x64__8wekyb3d8bbwe
PackageUserInformation : {S-1-5-21-2431295864-3614308495-3179744271-1001
                         [S-1-5-21-2431295864-3614308495-3179744271-1001]: Installed}

The user does not have a profile, and I even removed it from the registry.

I tried to remove it with Remove-AppxPackage, but I'm told that the package was not found because the current user doesn't have it installed.

There is only the local admin account on the machine, and I did try installing Bing News and then trying sysprep, but the same error appeared.

How do I convince Windows of this?
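A hedged sequence that usually clears this class of sysprep error, run from elevated PowerShell (package name copied from the error above):

# remove the package for every profile, including orphaned SIDs
Get-AppxPackage -AllUsers Microsoft.BingNews | Remove-AppxPackage -AllUsers

# and remove it from the provisioned (install-for-new-users) list
Get-AppxProvisionedPackage -Online |
    Where-Object DisplayName -eq Microsoft.BingNews |
    Remove-AppxProvisionedPackage -Online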

HAProxy doesn't balance requests between nodes of a Galera cluster

Posted: 22 Feb 2022 02:00 AM PST

I'm stuck on a problem with balancing requests from the app server to the Galera cluster nodes.

The structure of the HA setup is:

node1 10.62.10.35 (HAProxy + Keepalived) Master

node2 10.62.10.36 (HAProxy + Keepalived) Backup

node3 10.62.10.37 (HAProxy + Keepalived) Backup

Configuration of the Master Keepalived node1

global_defs {
    router_id PSQL1
}
vrrp_script haproxy {
    script "killall -0 haproxy"
    interval 2
    weight 2
}
vrrp_instance 50 {
    virtual_router_id 50
    advert_int 1
    priority 101
    state MASTER
    interface ens160
    virtual_ipaddress {
        10.62.10.254/22 dev ens160
    }
    track_script {
        haproxy
    }
}

Configuration of the Backup Keepalived node2

global_defs {
    router_id PSQL2
}
vrrp_script haproxy {
    script "killall -0 haproxy"
    interval 2
    weight 2
}
vrrp_instance 50 {
    virtual_router_id 50
    advert_int 1
    priority 3
    state BACKUP
    interface ens160
    virtual_ipaddress {
        10.62.10.254/22 dev ens160
    }
    track_script {
        haproxy
    }
}

Configuration of the Backup Keepalived node3 is similar to node2, except for priority and router_id.

Configuration of HAProxy is the same on each node:

frontend galera
    listen 10.62.10.254:3306
    mode tcp
    default_backend galera

frontend web
    bind *:8080
    mode http
    default_backend web

backend galera
    balance roundrobin
    option tcpka
    option mysql-check user haproxy_check
    server node1 10.62.10.35:3306 check weight 1
    server node2 10.62.10.36:3306 check weight 1
    server node3 10.62.10.37:3306 check weight 1

backend web
    mode http
    stats enable
    stats uri /
    stats realm Strictly\ Private
    stats auth Admin:admin
    stats auth Another_User:passwd

Keepalived works: if the master node is down (or keepalived/haproxy is stopped), the next backup node takes over the 10.62.10.254 address. But when the master is alive and I stop only MySQL on it, HAProxy doesn't send requests to the other nodes. When I stop the master's keepalived, the backup node likewise only uses its local MySQL server for requests.
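One thing that stands out in the config as posted (hedged, since it may just be a transcription artifact): a frontend section normally takes a bind directive rather than listen, so the Galera frontend might need to look like this:

frontend galera
    bind 10.62.10.254:3306
    mode tcp
    default_backend galera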

Any suggestions?

Thanks for your replies and have a nice day.

FreeBSD 10.3 SSSD AD integration issues

Posted: 22 Feb 2022 02:00 AM PST

I'm having a lot of issues with FreeBSD 10.3

I'm finding the binary packages are fairly useless. I've had to build nearly everything to make things "work". I like using the adcli tool to join to a domain (MUCH nicer than samba). But the binary version in pkg doesn't work. Building it from ports with all the obvious stuff enabled makes it work.

At this point I can successfully do a "getent", but no matter what I try it won't authenticate my account. SSH, sudo, even running login directly: it behaves as if I have a bad password.

I'm wondering if I need to use the Heimdal Kerberos package instead of MIT?

Here are my relevant configs:

krb5.conf:

[libdefaults]
   default_realm = MYDOMAIN-SR.NET
   forwardable = true
[realms]
   MYDOMAIN-SR.NET = {
      admin_server = ad.mydomain-sr.net
      kdc = ad.mydomain-sr.net
   }
[domain_realm]
   mydomain.net = MYDOMAIN-SR.NET
   .mydomain.net = MYDOMAIN-SR.NET
   MYDOMAIN.net = MYDOMAIN-SR.NET
   .MYDOMAIN.net = MYDOMAIN-SR.NET

nsswitch.conf:

#
# nsswitch.conf(5) - name service switch configuration file
# $FreeBSD: releng/10.3/etc/nsswitch.conf 224765 2011-08-10 20:52:02Z dougb $
#
#group: compat
group: files sss
#group_compat: nis
hosts: files dns
networks: files
#passwd: compat
passwd: files sss
#passwd_compat: nis
shells: files
services: compat
services_compat: nis
protocols: files
rpc: files

sssd.conf:

[sssd]
config_file_version = 2
#domains = mydomain-sr.net
domains = MYDOMAIN-SR.NET
services = nss, pam, pac
fallback_homedir = /home/%u
debug_level = 9

[pam]
pam_verbosity = 3

[domain/MYDOMAIN-SR.NET]
id_provider = ad
access_provider = ad
auth_provider = ad
chpass_provider = ad
ldap_id_mapping = False
#cache_credentials = true
cache_credentials = false
ad_server = ad.mydomain-sr.net
override_shell = /bin/tcsh
#ldap_sasl_canonicalize = false
#krb5_canonicalize = false
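One FreeBSD-specific thing worth checking, since getent only proves the nsswitch side: authentication also needs pam_sss wired into the PAM stack. A rough sketch of the lines that would go into /etc/pam.d/sshd (and system/login), assuming the sssd port, which installs the module under /usr/local/lib; exact ordering depends on the rest of your stack.

auth        sufficient    /usr/local/lib/pam_sss.so
account     required      /usr/local/lib/pam_sss.so    ignore_unknown_user ignore_authinfo_unavail
session     optional      /usr/local/lib/pam_sss.so
password    sufficient    /usr/local/lib/pam_sss.so    use_authtok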

nginx reverse stream proxy with multiple ports to the same server

Posted: 22 Feb 2022 01:49 AM PST

I'm trying to use nginx as a reverse proxy to two different servers. The servers require the use of client-side certificates for authentication, which means nginx is configured as a stream proxy leveraging the map $ssl_preread_server_name for SNI inspection to send to the correct server.

This works great for the pair of servers it fronts now. Both listen on 443 and provide completely different services, and the redirection via SNI works great.

The trouble is that one of the servers also uses port 9997 for communication (TLS), and we need to add more of these into the mix. Currently we're just hard-coding the 9997 traffic in nginx to the one server that uses it. This won't work as we move forward and have additional servers hosting content on 9997.

How can I configure nginx to stream both 443 and 9997 to the box that needs those communications, while also continuing to send 443 to the other server when needed?

It needs to be dynamic so that the traffic is sent to the RIGHT server.

Here's the config that works now (some info redacted):

#user  nobody;
worker_processes  1;

error_log   /var/log/nginx/error.log;
#pid        logs/nginx.pid;


events {
    worker_connections  1024;
}

stream {

    map $ssl_preread_server_name $upstream {
        server1.domain.com server1;
        server2.domain.com server2;
    }

    server {
        listen 443;
        proxy_pass $upstream;

        ssl_preread on;
    }

    server {
        listen 9997;
        proxy_pass 1.2.3.4:9997;
    }


    upstream server1 {
        server 1.2.3.4:443;
    }

    upstream server2 {
        server 1.2.3.5:443;
    }

}
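A sketch of one way to make 9997 dynamic too, assuming the clients also send SNI on that port: add a second map and a preread-enabled listener for 9997 inside the same stream block (server3 and its IP are placeholders for the additional boxes to come).

    map $ssl_preread_server_name $upstream_9997 {
        server1.domain.com server1_9997;
        server3.domain.com server3_9997;
    }

    server {
        listen 9997;
        proxy_pass $upstream_9997;
        ssl_preread on;
    }

    upstream server1_9997 {
        server 1.2.3.4:9997;
    }

    upstream server3_9997 {
        server 1.2.3.6:9997;
    }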

docker volume permission denied issue for apache running in docker while apache creating files in docroot

Posted: 22 Feb 2022 03:02 AM PST

I have created a Docker image with Apache in it. When that image runs as a container, the Apache parent process runs as root and the child processes run as www-data. One Docker volume (VOLUME defined in the Dockerfile) gets created as /app/cache/example, which is configured as the docroot in Apache.

Apache running in the container actually serves data from a backend HTTP endpoint and caches static assets in the Apache docroot.

But the issue is that Apache is not able to write static assets into the Docker volume; a permission denied error shows up in the logs, and hence all requests go to the backend HTTP endpoint.

To resolve this issue I have tried the approaches below, but unfortunately no luck so far:

  1. Changed the ownership of the volume on both sides, host and container, to www-data. Both host and container have this www-data user with the same username, uid, shell, etc.

  2. chmod 777 on both the host and container sides.

  3. Even tried the following in the Dockerfile:

    RUN useradd foo
    RUN mkdir /data && touch /data/x
    RUN chown -R foo:foo /data
    VOLUME /data

I need the help of experts to resolve this issue.
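Two more things worth ruling out, sketched with placeholder paths: ownership is matched by numeric uid rather than by name, and on an SELinux host (RHEL/CentOS) bind mounts need relabelling.

# find the numeric uid/gid Apache actually writes with inside the container
docker exec <container> id www-data

# chown the host directory to that uid (33 is the Debian default; use whatever id reported)
chown -R 33:33 /host/path/to/cache/example

# on an SELinux-enforcing host, let Docker relabel the bind mount
docker run -d -v /host/path/to/cache/example:/app/cache/example:z my-apache-image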

Sendmail's MTA, MDA and SMTP 550 User Unknown forwarded to postmaster

Posted: 22 Feb 2022 01:00 AM PST

I have two boxes running sendmail and configured as:

  1. MTA accepting connections from the outside world (mta.xyz.com)
  2. MDA accepting connections from internal network and storing them in user mailboxes (mda.xyz.com)

The MTA doesn't store any emails, it forwards everything to the MDA.

So now, let's say that there is an incoming email to a non-existent account bla@xyz.com:

  1. MTA accepts the recipient and opens an lmtp connection to MDA.
  2. MDA rejects the email with SMTP 550 User unknown error.
  3. MTA rejects the original email with SMTP 550 received from MDA.
  4. MTA closes the connection.
  5. MTA sends an email to postmaster notifying that the MDA rejected an email with the 550 User unknown error.

The flow seems reasonable. The email is rejected with error 550 in the original connection and the sender (spammer) is correctly notified about the problem. What bothers me, though, is that the MTA is sending each and every rejected email to postmaster, which amounts to a few dozen unwanted emails a day. The MDA doesn't send anything; it just rejects the email. I am happy for both of them to log the rejections, but how can I convince the MTA to just ignore the 550 User unknown errors it receives from the MDA?

My initial thought was to accept only specific email addresses, but in order to accept the emails the MTA has to be set up to use xyz.com as its local domain. This means any mailertable and access files are skipped (as far as I can tell).

So now I am thinking about adding some sendmail rules to the MTA to accept emails only for specific recipients. I am hoping that if the MTA rejects them first, it won't bother to send anything to postmaster.

What would you do? Can anyone help with the sendmail rules?

The MTA config:

DOMAIN(generic)
LOCAL_DOMAIN(`xyz.com')
FEATURE(access_db, `hash -o -T<TMPF> /etc/mail/access')
dnl FEATURE(local_lmtp)
define(`confDOMAIN_NAME', `xyz.com')
(... some cert-related and other unrelated config ...)
define(`confNO_RCPT_ACTION', `add-to-undisclosed')
define(`confPRIVACY_FLAGS', `authwarnings,noexpn,novrfy')
define(`MAIL_HUB', `mda.xyz.com.')
define(`SMART_HOST', `mda.xyz.com.')
define(`confFORWARD_PATH', `')
dnl MAILER(local)
MAILER(smtp)

MTA's access file:

Connect:[127.0.0.1]     OK
To:xyz.com   RELAY

The MDA config:

DOMAIN(generic)
FEATURE(access_db, `hash -o -T<TMPF> /etc/mail/access')
FEATURE(blacklist_recipients)
FEATURE(`use_cw_file')
FEATURE(`smrsh')
dnl FEATURE(local_lmtp)
dnl FEATURE(mailertable, `hash -o /etc/mail/mailertable')
FEATURE(virtusertable, `hash -o /etc/mail/virtusertable')
FEATURE(`local_procmail_lmtp')
(... some cert-related and other unrelated configs ...)
define(`confNO_RCPT_ACTION', `add-to-undisclosed')
define(`confPRIVACY_FLAGS', `authwarnings,noexpn,novrfy')
MAILER(local)
MAILER(smtp)
MAILER(procmail)

MDA's access file:

To:xyz.com   OK
From:192.168    OK

MDA's virtualusertable:

# In 'aliases' those are redirected to procmail
user1@xyz.com       user1-xyz-com.virtual
user2@xyz.com       user2-xyz-com.virtual

MDA's local-host-names:

xyz.com
somesubdomain.xyz.com

How do I make html files accessible from OpenShift server running python?

Posted: 22 Feb 2022 01:00 AM PST

I have an OpenShift DIY app running Python. However, I cannot reach static files like HTML (or run PHP). If I try accessing mydomain.rhcloud.com/hello.html, I get: uWSGI Error: Python application not found.

Could you please help me make the HTML files accessible? My directories look like:

repo
   diy
      something.py  << It serves all requests to the domain, however if it doesn't
      hello.html    << exists, then I get the above error
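If the DIY cartridge starts uWSGI itself, one hedged option is to have uWSGI serve existing files from the app directory before falling back to the Python application (the port and paths below are assumptions based on the layout shown):

uwsgi --http :8080 \
      --wsgi-file something.py \
      --check-static /path/to/repo/diy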

Rsyslog configuration for changing source interface

Posted: 22 Feb 2022 04:02 AM PST

I'm working on rsyslog.conf on CentOS 6.2.

Is there any configuration in rsyslog.conf to change the source interface (e.g. eth0, eth1), so that the messages sent to the syslog server carry the source IP address of that interface?

Amazon EC2 Linux AMI: What is the third column of yum list installed?

Posted: 22 Feb 2022 04:02 AM PST

Output of yum list installed, rightmost column:

Most of them say installed, some say @amzn-main, and some say @amzn-updates.

What is the meaning of this? It says tmux is @amzn-main, but I have been running it. So is it actually installed or not?

I'm trying to compile zsh 5.0.2, but its configure script is complaining about not finding ncurses. ncurses is listed as @amzn-updates. I have been looking around the system for its files without much luck, and sudo yum install ncurses gives me

Package ncurses-5.7-3.20090208.11.amzn1.x86_64 already installed and latest version  
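Side note on the compile failure: the configure script needs the ncurses headers, which live in a separate package from the runtime library that is already listed as installed; on Amazon Linux that is usually:

sudo yum install ncurses-devel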

IO-intensive processes hang with iowait, but no activity going on

Posted: 22 Feb 2022 03:02 AM PST

I have a bunch of IO-intensive jobs, and to boost performance, I just installed two SSDs in a compute server, one as a scratch file system, one as swap. After running for some time, all my processes hang in "D" state, consume no CPU, and the system reports 67% idle and 33% wait. iostat shows no disk activity going on, and the system is otherwise responsive, including the relevant file systems. Attaching strace to the processes produces no output.

Looking in /proc/(pid)/fd, I discover that all processes are using (reading) one common file. I can't see any reason why this should cause a problem, but I replaced the file, killed the processes, and let everything continue (i.e. new processes will be launched). We'll see if things get stuck on the new file, on a different file, or - ideally - not at all :-)

I also found a couple of these in kern.log:

BUG: unable to handle kernel paging request at ffffeb8800096e5c  

Lots of other information, but I don't know how to decipher it - except that it refers to the PID and name of one of my processes.

Any idea what is going on here, or how to fix it? This is on Ubuntu 12.04 LTS, Dell-something box with a RocketRaid disk controller and btrfs file system.
