Wednesday, March 24, 2021

Recent Questions - Server Fault

haproxy & complex redirects: am I missing anything?

Posted: 24 Mar 2021 10:56 PM PDT

I have a complex Apache config that mostly just does ProxyPassReverse, and that seems pretty silly so I've been converting it to haproxy. Mostly this has led to things being much, much simpler and clearer; yay. The exception is complex RewriteRule and RedirectMatch sorts of things. In particular, I have these:

RewriteRule ^(?:/l?mw)+/(.*\.php.*)$ https://mw-live.lojban.org/$1 [R=301,L]
RewriteRule ^(?:/l?mw)+/(.*)$ https://mw-live.lojban.org/papri/$1 [R=301,L]

The intent here is to redirect, for example, /lmw/mw/lmw/foo.php to /foo.php, and /mw/lmw/mw/MyPage to /papri/MyPage (yes, those URLs are weird; an old version of the site had some redirect problems).

As far as I can tell, the right way to do this is not in haproxy at all, but on the back-end web service itself. I'm asking in case I'm missing something, but this sort of rewriting really does seem to work much better on the back-end web service than in haproxy.

I'm having 2 problems with trying to do this in haproxy:

  1. I can't find a way to get haproxy to record the redirects that it itself performs. I've told haproxy to show me the Location response header, but that only shows redirects from the back-end server:

     capture response header Location len 128  
  2. The ways of doing this in haproxy are ... clunky. I've found two ways that seem to work:

# Old URL formats; trim leading "/mw" and "/lmw" when the target is a php file
http-request redirect code 301 location http://%[hdr(host)]%[url,regsub(^/lmw,,g)] if { path_beg /lmw } { path_sub .php }
http-request redirect code 301 location http://%[hdr(host)]%[url,regsub(^/mw,,g)] if { path_beg /mw } { path_sub .php }
# Old URL formats; trim leading "/mw" and "/lmw"
http-request redirect code 301 location http://%[hdr(host)]/papri%[url,regsub(^/lmw,,g)] if { path_beg /lmw }
http-request redirect code 301 location http://%[hdr(host)]/papri%[url,regsub(^/mw,,g)] if { path_beg /mw }
http-request redirect code 301 location http://%[hdr(host)]/papri%[url,regsub(^/papri/lmw,,g)] if { path_beg /papri/lmw }
http-request redirect code 301 location http://%[hdr(host)]/papri%[url,regsub(^/papri/mw,,g)] if { path_beg /papri/mw }

The problem with this method, besides it being many lines long, is that it produces a series of redirects, repeatedly redirecting each URL to one with one less /mw or whatever.
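
(A possibly shorter variant, offered only as an untested sketch: since regsub takes a regular expression, a repeated group should strip every leading /mw and /lmw segment in one pass, avoiding the redirect chain. Whether haproxy's regex parser accepts this pattern inside converter arguments is an assumption worth verifying.)

# untested sketch: collapse every leading /mw or /lmw segment in a single redirect
http-request redirect code 301 location http://%[hdr(host)]%[path,regsub(^(/l?mw)+/,/)] if { path_reg ^(/l?mw)+/.*\.php }
http-request redirect code 301 location http://%[hdr(host)]/papri%[path,regsub(^(/l?mw)+/,/)] if { path_reg ^(/l?mw)+/ }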

The other is having this on the front end:

use_backend mw-live-back-old-with-php if { hdr_beg(host) -i mw-live. } { path_reg ^/l?mw.*\.php }
use_backend mw-live-back-old-without-php if { hdr_beg(host) -i mw-live. } { path_reg ^/l?mw }

and then these special backends to go with it:

backend mw-live-back-old-with-php
    http-request replace-path ^(?:/l?mw)+/(.*\.php.*)$ /\1
    http-request redirect prefix / code 301

backend mw-live-back-old-without-php
    http-request replace-path ^(?:/l?mw)+/(.*) /papri/\1
    http-request redirect prefix / code 301

The problem with this method is that it's also quite long, and it seems silly to create backends just for this.

The thing you'd think would work, which I stole from https://fromanegg.com/post/2014/12/05/how-to-rewrite-and-redirect-with-haproxy/ , is to have these lines as part of the generic backend:

http-request replace-path ^(?:/l?mw)+/(.*\.php.*)$ /\1 if { path_reg ^/l?mw.*\.php }
http-request redirect prefix / code 301 if { path_reg ^/l?mw.*\.php }
http-request replace-path ^(?:/l?mw)+/(.*)$ /papri/\1 if { path_reg ^/l?mw }
http-request redirect prefix / code 301 if { path_reg ^/l?mw }

This fails because the redirect never fires, because by the time we get to the redirect line, the path_reg no longer matches.
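
(One workaround for that ordering problem, sketched here under the assumption that a transaction-scoped variable set before replace-path is still visible afterwards: record the match in a variable first, then condition both directives on the variable instead of the path.)

# hedged sketch: remember the original match before replace-path rewrites it
http-request set-var(txn.oldmw) path if { path_reg ^/l?mw }
http-request replace-path ^(?:/l?mw)+/(.*)$ /papri/\1 if { var(txn.oldmw) -m found }
http-request redirect prefix / code 301 if { var(txn.oldmw) -m found }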

So.

Am I missing something, or should I really just move this sort of complexity to the back-end web service?

How do I delete a GCP organization?

Posted: 24 Mar 2021 10:46 PM PDT

I need to delete a GCP organization and everything in it (projects, folders, etc.) - nuke it completely. How can I do that?
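
(For reference, a hedged sketch of the usual first step: the organization resource itself can't be deleted directly; it goes away when the underlying Google Workspace / Cloud Identity account is deleted, so the practical work is emptying it. ORG_ID and PROJECT_ID below are placeholders.)

# list the projects parented by the organization, then delete them one by one;
# deleted projects sit in a ~30-day pending-deletion state before being purged
gcloud projects list --filter="parent.id=ORG_ID parent.type=organization" --format="value(projectId)"
gcloud projects delete PROJECT_ID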

wordpress nginx server not running after restart

Posted: 24 Mar 2021 11:00 PM PDT

I can't start the server after a reboot (WordPress + nginx on Vultr).

root@wreckeroo:~# sudo systemctl status nginx
● nginx.service - nginx - high performance web server
   Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
   Active: failed (Result: exit-code) since Thu 2021-03-25 05:15:27 UTC; 8min ago
     Docs: http://nginx.org/en/docs/
  Process: 4549 ExecStop=/bin/sh -c /bin/kill -s TERM $(/bin/cat /var/run/nginx.pid) (code=exited, status=0/SUCCESS)
  Process: 4557 ExecStart=/usr/sbin/nginx -c /etc/nginx/nginx.conf (code=exited, status=1/FAILURE)
 Main PID: 1679 (code=exited, status=0/SUCCESS)

Mar 25 05:15:27 wreckeroo.com.au systemd[1]: Stopped nginx - high performance web server.
Mar 25 05:15:27 wreckeroo.com.au systemd[1]: Starting nginx - high performance web server...
Mar 25 05:15:27 wreckeroo.com.au nginx[4557]: nginx: [emerg] mkdir() "/var/cache/nginx/client_temp" failed (2: No such file or directory)
Mar 25 05:15:27 wreckeroo.com.au systemd[1]: nginx.service: Control process exited, code=exited status=1
Mar 25 05:15:27 wreckeroo.com.au systemd[1]: nginx.service: Failed with result 'exit-code'.
Mar 25 05:15:27 wreckeroo.com.au systemd[1]: Failed to start nginx - high performance web server.
root@wreckeroo:~# sudo systemctl start nginx
Job for nginx.service failed because the control process exited with error code.
See "systemctl status nginx.service" and "journalctl -xe" for details.
root@wreckeroo:~# journalctl -xe
Mar 25 05:29:23 wreckeroo.com.au sudo[5979]:     root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/systemctl start nginx
Mar 25 05:29:23 wreckeroo.com.au sudo[5979]: pam_unix(sudo:session): session opened for user root by root(uid=0)
Mar 25 05:29:23 wreckeroo.com.au systemd[1]: Starting nginx - high performance web server...
-- Subject: Unit nginx.service has begun start-up
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- Unit nginx.service has begun starting up.
Mar 25 05:29:23 wreckeroo.com.au nginx[5982]: nginx: [emerg] mkdir() "/var/cache/nginx/client_temp" failed (2: No such file or directory)
Mar 25 05:29:23 wreckeroo.com.au systemd[1]: nginx.service: Control process exited, code=exited status=1
Mar 25 05:29:23 wreckeroo.com.au sudo[5979]: pam_unix(sudo:session): session closed for user root
Mar 25 05:29:23 wreckeroo.com.au systemd[1]: nginx.service: Failed with result 'exit-code'.
Mar 25 05:29:23 wreckeroo.com.au systemd[1]: Failed to start nginx - high performance web server.
-- Subject: Unit nginx.service has failed
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- Unit nginx.service has failed.
--
-- The result is RESULT.
Mar 25 05:29:33 wreckeroo.com.au kernel: [UFW BLOCK] IN=ens3 OUT= MAC=56:00:03:43:54:33:fe:00:03:43:54:33:08:00 SRC=79.124.62.86 DST=139.180.183.126 LEN=40 TOS=0x00 PREC=0x00 TTL=245
Mar 25 05:29:50 wreckeroo.com.au kernel: [UFW BLOCK] IN=ens3 OUT= MAC=56:00:03:43:54:33:fe:00:03:43:54:33:08:00 SRC=79.124.62.86 DST=139.180.183.126 LEN=40 TOS=0x00 PREC=0x00 TTL=245
Mar 25 05:30:01 wreckeroo.com.au CRON[6096]: pam_unix(cron:session): session opened for user root by (uid=0)
Mar 25 05:30:01 wreckeroo.com.au CRON[6097]: (root) CMD (/usr/local/maldetect/maldet --mkpubpaths >> /dev/null 2>&1)
Mar 25 05:30:01 wreckeroo.com.au CRON[6096]: pam_unix(cron:session): session closed for user root
Mar 25 05:30:13 wreckeroo.com.au kernel: [UFW BLOCK] IN=ens3 OUT= MAC=56:00:03:43:54:33:fe:00:03:43:54:33:08:00 SRC=79.124.62.86 DST=139.180.183.126 LEN=40 TOS=0x00 PREC=0x00 TTL=242
Mar 25 05:30:18 wreckeroo.com.au kernel: [UFW BLOCK] IN=ens3 OUT= MAC=56:00:03:43:54:33:fe:00:03:43:54:33:08:00 SRC=49.88.112.114 DST=139.180.183.126 LEN=908 TOS=0x00 PREC=0x00 TTL=4
lines 1378-1404/1404 (END)

How can I fix this issue?
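
(A hedged diagnostic sketch: the [emerg] line says nginx cannot create /var/cache/nginx/client_temp, which usually means /var/cache/nginx itself vanished across the reboot. Recreating it is a reasonable first step; the cache path and the "nginx" user below are assumptions based on the standard nginx.org packages.)

ls -ld /var/cache/nginx                     # confirm whether the directory survived the reboot
sudo mkdir -p /var/cache/nginx/client_temp  # recreate the missing cache directory
sudo chown -R nginx:nginx /var/cache/nginx  # assumes nginx workers run as the "nginx" user
sudo systemctl start nginx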

Rails application not working on AWS after upgrade to Rails 6.1.3

Posted: 24 Mar 2021 09:59 PM PDT

I recently upgraded my application to Rails 6.1.3. When I attempted to deploy the application to AWS, the deployment failed with the error message "Following services are not running: application.". I've tried running the app both locally and by SSHing into the AWS instance, so I'm not sure why Amazon's auto-deployment script is failing to launch it.

After the application fails to deploy, the CPU usage also stays at 100%.

My Puma file is

workers Integer(ENV['WEB_CONCURRENCY'] || 2)
threads_count = Integer(ENV['RAILS_MAX_THREADS'] || 5)
threads threads_count, threads_count

preload_app!

rackup      DefaultRackup
port        ENV['PORT']     || 3000
environment ENV['RACK_ENV'] || 'development'

on_worker_boot do
  # Worker specific setup for Rails 4.1+
  # See: https://devcenter.heroku.com/articles/deploying-rails-applications-with-the-puma-web-server#on-worker-boot
  ActiveRecord::Base.establish_connection
end

This is my eb-activity.log

  perl-Sub-Install.noarch 0:0.926-6.8.amzn1
  perl-TimeDate.noarch 1:2.30-2.7.amzn1
  perl-Try-Tiny.noarch 0:0.12-2.5.amzn1
  perl-URI.noarch 0:1.60-9.8.amzn1
  perl-WWW-RobotRules.noarch 0:6.02-5.12.amzn1
  perl-libwww-perl.noarch 0:6.05-2.17.amzn1

Complete!

Installing cloud watch tools
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100 24225  100 24225    0     0   271k      0 --:--:-- --:--:-- --:--:--  271k
Archive:  CloudWatchMonitoringScripts-1.2.2.zip
  extracting: aws-scripts-mon/awscreds.template
  inflating: aws-scripts-mon/AwsSignatureV4.pm
  inflating: aws-scripts-mon/CloudWatchClient.pm
  inflating: aws-scripts-mon/LICENSE.txt
  inflating: aws-scripts-mon/mon-get-instance-stats.pl
  inflating: aws-scripts-mon/mon-put-instance-data.pl
  inflating: aws-scripts-mon/NOTICE.txt
Setting shell script permissions
Installing cron script

[2021-03-25T04:56:06.367Z] INFO  [3111]  - [Application deployment Version1.32.20@9/StartupStage0/EbExtensionPostBuild/Infra-EmbeddedPostBuild/postbuild_0_Cosmic_Delivery_Production_Website] : Completed activity.
[2021-03-25T04:56:06.367Z] INFO  [3111]  - [Application deployment Version1.32.20@9/StartupStage0/EbExtensionPostBuild/Infra-EmbeddedPostBuild] : Completed activity.
[2021-03-25T04:56:06.403Z] INFO  [3111]  - [Application deployment Version1.32.20@9/StartupStage0/EbExtensionPostBuild] : Completed activity.
[2021-03-25T04:56:06.403Z] INFO  [3111]  - [Application deployment Version1.32.20@9/StartupStage0/InfraCleanEbextension] : Starting activity...
[2021-03-25T04:56:06.445Z] INFO  [3111]  - [Application deployment Version1.32.20@9/StartupStage0/InfraCleanEbextension] : Completed activity. Result:
  Cleaned ebextensions subdirectories from /var/app/ondeck.
[2021-03-25T04:56:06.445Z] INFO  [3111]  - [Application deployment Version1.32.20@9/StartupStage0] : Completed activity. Result:
  Application deployment - Command CMD-SelfStartup stage 0 completed
[2021-03-25T04:56:06.445Z] INFO  [3111]  - [Application deployment Version1.32.20@9/StartupStage1] : Starting activity...
[2021-03-25T04:56:06.445Z] INFO  [3111]  - [Application deployment Version1.32.20@9/StartupStage1/AppDeployEnactHook] : Starting activity...
[2021-03-25T04:56:06.450Z] INFO  [3111]  - [Application deployment Version1.32.20@9/StartupStage1/AppDeployEnactHook/01_flip.sh] : Starting activity...
[2021-03-25T04:56:06.793Z] INFO  [3111]  - [Application deployment Version1.32.20@9/StartupStage1/AppDeployEnactHook/01_flip.sh] : Completed activity. Result:
  ++ /opt/elasticbeanstalk/bin/get-config container -k app_staging_dir
  + EB_APP_STAGING_DIR=/var/app/ondeck
  ++ /opt/elasticbeanstalk/bin/get-config container -k app_deploy_dir
  + EB_APP_DEPLOY_DIR=/var/app/current
  ++ /opt/elasticbeanstalk/bin/get-config container -k app_user
  + EB_APP_USER=webapp
  + '[' -d /var/app/current ']'
  + mv /var/app/current /var/app/current.old
  + mv /var/app/ondeck /var/app/current
  + mkdir -p /var/app/current/tmp /var/app/current/public
  + chown -R webapp:webapp /var/app/current/tmp /var/app/current/public
  + nohup rm -rf /var/app/current.old
[2021-03-25T04:56:06.793Z] INFO  [3111]  - [Application deployment Version1.32.20@9/StartupStage1/AppDeployEnactHook/01stop_xray.sh] : Starting activity...
[2021-03-25T04:56:08.916Z] INFO  [3111]  - [Application deployment Version1.32.20@9/StartupStage1/AppDeployEnactHook/01stop_xray.sh] : Completed activity. Result:
  Executing: if ( initctl status xray | grep start ); then initctl stop xray; fi
  xray start/running, process 2278
  xray stop/waiting
[2021-03-25T04:56:08.916Z] INFO  [3111]  - [Application deployment Version1.32.20@9/StartupStage1/AppDeployEnactHook/02_restart_app_server.sh] : Starting activity...
[2021-03-25T04:56:08.922Z] INFO  [3111]  - [Application deployment Version1.32.20@9/StartupStage1/AppDeployEnactHook/02_restart_app_server.sh] : Completed activity. Result:
  + initctl restart puma
  initctl: Unknown instance:
  + initctl start puma
  puma start/running, process 8214
[2021-03-25T04:56:08.922Z] INFO  [3111]  - [Application deployment Version1.32.20@9/StartupStage1/AppDeployEnactHook/02start_xray.sh] : Starting activity...
[2021-03-25T04:56:09.155Z] INFO  [3111]  - [Application deployment Version1.32.20@9/StartupStage1/AppDeployEnactHook/02start_xray.sh] : Completed activity.
[2021-03-25T04:56:09.155Z] INFO  [3111]  - [Application deployment Version1.32.20@9/StartupStage1/AppDeployEnactHook/28_create_pids.sh] : Starting activity...
[2021-03-25T04:56:39.577Z] INFO  [3111]  - [Application deployment Version1.32.20@9/StartupStage1/AppDeployEnactHook/28_create_pids.sh] : Completed activity. Result:
  + /opt/elasticbeanstalk/bin/healthd-track-pidfile --proxy nginx
  ++ /opt/elasticbeanstalk/bin/get-config container -k puma_pid_dir
  + PUMA_PID_DIR=/var/run/puma
  + /opt/elasticbeanstalk/bin/healthd-track-pidfile --name application --location /var/run/puma/puma.pid
[2021-03-25T04:56:39.578Z] INFO  [3111]  - [Application deployment Version1.32.20@9/StartupStage1/AppDeployEnactHook] : Completed activity. Result:
  Successfully execute hooks in directory /opt/elasticbeanstalk/hooks/appdeploy/enact.
[2021-03-25T04:56:39.578Z] INFO  [3111]  - [Application deployment Version1.32.20@9/StartupStage1/AppDeployPostHook] : Starting activity...
[2021-03-25T04:56:39.578Z] INFO  [3111]  - [Application deployment Version1.32.20@9/StartupStage1/AppDeployPostHook/01_rails_support.sh] : Starting activity...
[2021-03-25T04:56:39.991Z] INFO  [3111]  - [Application deployment Version1.32.20@9/StartupStage1/AppDeployPostHook/01_rails_support.sh] : Completed activity. Result:
  ++ /opt/elasticbeanstalk/bin/get-config container -k app_deploy_dir
  + EB_APP_DEPLOY_DIR=/var/app/current
  ++ /opt/elasticbeanstalk/bin/get-config container -k app_log_dir
  + EB_APP_LOG_DIR=/var/app/containerfiles/logs
  + ln -sf /var/app/current/log/delayed_job.log /var/app/current/log/development_inner.log /var/app/current/log/development.log /var/app/current/log/production.log /var/app/current/log/test.log /var/app/current/log/worker_inner.log /var/app/containerfiles/logs
[2021-03-25T04:56:39.991Z] INFO  [3111]  - [Application deployment Version1.32.20@9/StartupStage1/AppDeployPostHook] : Completed activity. Result:
  Successfully execute hooks in directory /opt/elasticbeanstalk/hooks/appdeploy/post.
[2021-03-25T04:56:39.991Z] INFO  [3111]  - [Application deployment Version1.32.20@9/StartupStage1/PostInitHook] : Starting activity...
[2021-03-25T04:56:39.992Z] INFO  [3111]  - [Application deployment Version1.32.20@9/StartupStage1/PostInitHook] : Completed activity. Result:
  Successfully execute hooks in directory /opt/elasticbeanstalk/hooks/postinit.
[2021-03-25T04:56:39.992Z] INFO  [3111]  - [Application deployment Version1.32.20@9/StartupStage1] : Completed activity. Result:
  Application deployment - Command CMD-SelfStartup stage 1 completed
[2021-03-25T04:56:39.992Z] INFO  [3111]  - [Application deployment Version1.32.20@9/AddonsAfter] : Starting activity...
[2021-03-25T04:56:39.992Z] INFO  [3111]  - [Application deployment Version1.32.20@9/AddonsAfter/ConfigLogRotation] : Starting activity...
[2021-03-25T04:56:39.992Z] INFO  [3111]  - [Application deployment Version1.32.20@9/AddonsAfter/ConfigLogRotation/10-config.sh] : Starting activity...
[2021-03-25T04:56:40.170Z] INFO  [3111]  - [Application deployment Version1.32.20@9/AddonsAfter/ConfigLogRotation/10-config.sh] : Completed activity. Result:
  Disabled forced hourly log rotation.
[2021-03-25T04:56:40.170Z] INFO  [3111]  - [Application deployment Version1.32.20@9/AddonsAfter/ConfigLogRotation] : Completed activity. Result:
  Successfully execute hooks in directory /opt/elasticbeanstalk/addons/logpublish/hooks/config.
[2021-03-25T04:56:40.170Z] INFO  [3111]  - [Application deployment Version1.32.20@9/AddonsAfter] : Completed activity.
[2021-03-25T04:56:40.170Z] INFO  [3111]  - [Application deployment Version1.32.20@9] : Completed activity. Result:
  Application deployment - Command CMD-SelfStartup succeeded
[2021-03-25T04:57:31.185Z] INFO  [10744] - [CMD-TailLogs] : Starting activity...
[2021-03-25T04:57:31.185Z] INFO  [10744] - [CMD-TailLogs/AddonsBefore] : Starting activity...
[2021-03-25T04:57:31.185Z] INFO  [10744] - [CMD-TailLogs/AddonsBefore] : Completed activity.
[2021-03-25T04:57:31.186Z] INFO  [10744] - [CMD-TailLogs/TailLogs] : Starting activity...
[2021-03-25T04:57:31.186Z] INFO  [10744] - [CMD-TailLogs/TailLogs/TailLogs] : Starting activity...

And this is a puma error I'm getting in another log

2021/03/25 04:24:05 [crit] 3116#0: *1 connect() to unix:///var/run/puma/my_app.sock failed (2: No such file or directory) while connecting to upstream, client: 172.31.31.115, server: _, request: "GET /status/healthcheck HTTP/1.1", upstream: "http://unix:///var/run/puma/my_app.sock:/status/healthcheck", host: "172.31.33.122"  

I've been working on this problem for a bit and am stumped as to what might be causing the issue. I was wondering if someone could point me in the right direction.
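
(One thing worth checking, offered as a hedged guess rather than a known fix: the nginx error shows it proxying to a unix socket under /var/run/puma/, while the Puma config above only binds a TCP port. On Elastic Beanstalk's Ruby platform the platform-provided Puma config normally binds that socket, and a custom config/puma.rb can silently replace it. If that's what's happening, a bind line like the following may help; the socket path is an assumption taken from the nginx error.)

# hedged sketch: make the custom Puma config listen where nginx expects it
bind "unix:///var/run/puma/my_app.sock"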

Conflict with Debian version

Posted: 24 Mar 2021 09:15 PM PDT

On a server I set up a few years ago, typing cat /etc/lsb-release gives me the following result:

DISTRIB_RELEASE=7
DISTRIB_CODENAME=
DISTRIB_DESCRIPTION=

And lsb_release -a displays:

Distributor ID: Debian
Description:    Debian GNU/Linux 9.13 (stretch)
Release:    7
Codename:   stretch

It seems like the system is not sure whether it is Debian 9 (Stretch) or Debian 7 (Wheezy). Some packages that should be available on Stretch cannot be found with apt-cache search.

What can I do to fix this?
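
(A hedged starting point: lsb_release takes its Release value from /etc/lsb-release when that file exists, and apt never updates that file, so a leftover DISTRIB_RELEASE=7 from an old install would explain the mismatch. The checks below are standard; correcting any stale wheezy entries in the apt sources would address the missing packages.)

cat /etc/debian_version        # the authoritative version; should print 9.13 on stretch
grep -r '^deb ' /etc/apt/sources.list /etc/apt/sources.list.d/   # look for stale wheezy suites
sudo apt-get update            # refresh package lists after correcting the sources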

Postfix, Edited sasl_passwd and now the relay fails authentication

Posted: 24 Mar 2021 08:52 PM PDT

So I'm running Postfix 3.1.0 on a work server. It's relaying email to AWS SES, and I just updated the access key. I removed the previous value in /etc/postfix/sasl_passwd, ran postmap hash:/etc/postfix/sasl_passwd, and email started failing authentication. Then I ran systemctl restart postfix, and still no auth success. The only information I can find is about the postmap command, and that should JUST WORK.

Like I said, I logged onto a working system, created a new access key in IAM, and put that new key into /etc/postfix/sasl_passwd; that's the only change I've made. What did I do wrong?
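
(One hedged guess worth checking first: SES SMTP authentication does not accept a raw IAM secret key; the secret has to be converted into an SES SMTP password, or SMTP credentials generated directly in the SES console. If the new IAM key was pasted in verbatim, auth would fail exactly like this. The expected file shape, with placeholder host and values:)

# /etc/postfix/sasl_passwd -- one line, then rebuild the map and reload
# [email-smtp.us-east-1.amazonaws.com]:587 SMTP_USERNAME:SMTP_PASSWORD
sudo postmap hash:/etc/postfix/sasl_passwd
sudo systemctl reload postfix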

Getting libssl and libcrypto conflict warning while compiling php on RHEL 7.8

Posted: 24 Mar 2021 07:26 PM PDT

I'm getting the following warning messages while compiling PHP on RHEL 7.8. I am able to compile and install PHP successfully, but I am not sure what side effects these warnings will have. Is there any way to resolve them?

/usr/bin/ld: warning: libssl.so.10, needed by //usr/lib64/libssh2.so.1, may conflict with libssl.so.1.1
/usr/bin/ld: warning: libssl.so.10, needed by //usr/lib64/libssh2.so.1, may conflict with libssl.so.1.1
/usr/bin/ld: warning: libcrypto.so.10, needed by //usr/lib64/libssh2.so.1, may conflict with libcrypto.so.1.1
/usr/bin/ld: warning: libcrypto.so.10, needed by //usr/lib64/libssh2.so.1, may conflict with libcrypto.so.1.1
/usr/bin/ld: warning: libcrypto.so.10, needed by //usr/lib64/libssh2.so.1, may conflict with libcrypto.so.1.1
/usr/bin/ld: warning: libcrypto.so.10, needed by //usr/lib64/libssh2.so.1, may conflict with libcrypto.so.1.1
/usr/bin/ld: warning: libssl.so.10, needed by //usr/lib64/libssh2.so.1, may conflict with libssl.so.1.1
/usr/bin/ld: warning: libssl.so.10, needed by //usr/lib64/libssh2.so.1, may conflict with libssl.so.1.1
/usr/bin/ld: warning: libcrypto.so.10, needed by //usr/lib64/libssh2.so.1, may conflict with libcrypto.so.1.1
/usr/bin/ld: warning: libcrypto.so.10, needed by //usr/lib64/libssh2.so.1, may conflict with libcrypto.so.1.1
/usr/bin/ld: warning: libcrypto.so.10, needed by //usr/lib64/libssh2.so.1, may conflict with libcrypto.so.1.1
/usr/bin/ld: warning: libcrypto.so.10, needed by //usr/lib64/libssh2.so.1, may conflict with libcrypto.so.1.1
/usr/bin/ld: warning: libssl.so.10, needed by //usr/lib64/libssh2.so.1, may conflict with libssl.so.1.1
/usr/bin/ld: warning: libssl.so.10, needed by //usr/lib64/libssh2.so.1, may conflict with libssl.so.1.1
/usr/bin/ld: warning: libcrypto.so.10, needed by //usr/lib64/libssh2.so.1, may conflict with libcrypto.so.1.1
/usr/bin/ld: warning: libcrypto.so.10, needed by //usr/lib64/libssh2.so.1, may conflict with libcrypto.so.1.1
/usr/bin/ld: warning: libcrypto.so.10, needed by //usr/lib64/libssh2.so.1, may conflict with libcrypto.so.1.1
/usr/bin/ld: warning: libcrypto.so.10, needed by //usr/lib64/libssh2.so.1, may conflict with libcrypto.so.1.1

#OpenSSL Installation

./config --prefix=/usr/local/ssl shared
make
make test
make install

#Apache Installation

./configure \
--prefix=/usr/local/apache2 \
--with-ssl=/usr/local/ssl \
--with-included-apr \
--with-mpm=prefork \
--enable-ssl \
--enable-modules=all \
--enable-mods-shared=most
make
make install

#PHP Installation

'./configure' \
'--prefix=/usr/local/php7' \
'--with-apxs2=/usr/local/apache2/bin/apxs' \
'--with-config-file-path=/usr/local/php7/conf' \
'--with-curl' \
'--with-kerberos' \
'--with-openssl=/usr/local/ssl' \
'--with-openssl-dir=/usr/local/ssl' \
'--with-zlib' \
'--with-zlib-dir=/lib64/' \
'--enable-bcmath' \
'--enable-ftp' \
'--enable-gd-native-ttf' \
'--enable-mbstring' \
'--enable-opcache' \
'--enable-pcntl' \
'--enable-pdo' \
'--enable-shared' \
'--enable-shmop' \
'--enable-soap' \
'--enable-sockets' \
'--enable-sysvshm' \
'--enable-xml' \
'--enable-zip' \
'--without-libzip' \

ldd /usr/local/ssl/bin/openssl

linux-vdso.so.1 =>  (0x00007fff46493000)
libssl.so.1.1 => /usr/local/ssl/lib/libssl.so.1.1 (0x00007fc710c31000)
libcrypto.so.1.1 => /usr/local/ssl/lib/libcrypto.so.1.1 (0x00007fc710746000)
libdl.so.2 => /lib64/libdl.so.2 (0x00007fc710542000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x00007fc710326000)
libc.so.6 => /lib64/libc.so.6 (0x00007fc70ff58000)
/lib64/ld-linux-x86-64.so.2 (0x00007fc710ec3000)

ldd /usr/local/apache2/bin/httpd

linux-vdso.so.1 =>  (0x00007ffcea29e000)
libpcre.so.1 => /lib64/libpcre.so.1 (0x00007fcb03f33000)
libaprutil-1.so.0 => /usr/local/apache2/lib/libaprutil-1.so.0 (0x00007fcb03d09000)
libexpat.so.1 => /lib64/libexpat.so.1 (0x00007fcb03adf000)
libapr-1.so.0 => /usr/local/apache2/lib/libapr-1.so.0 (0x00007fcb038a4000)
libuuid.so.1 => /lib64/libuuid.so.1 (0x00007fcb0369f000)
librt.so.1 => /lib64/librt.so.1 (0x00007fcb03497000)
libcrypt.so.1 => /lib64/libcrypt.so.1 (0x00007fcb03260000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x00007fcb03044000)
libdl.so.2 => /lib64/libdl.so.2 (0x00007fcb02e40000)
libc.so.6 => /lib64/libc.so.6 (0x00007fcb02a72000)
/lib64/ld-linux-x86-64.so.2 (0x00007fcb04195000)
libfreebl3.so => /lib64/libfreebl3.so (0x00007fcb0286f000)

ldd /usr/local/apache2/modules/mod_ssl.so

linux-vdso.so.1 =>  (0x00007ffc2019d000)
libssl.so.1.1 => /usr/local/ssl/lib/libssl.so.1.1 (0x00007fb63e115000)
libcrypto.so.1.1 => /usr/local/ssl/lib/libcrypto.so.1.1 (0x00007fb63dc2a000)
libuuid.so.1 => /lib64/libuuid.so.1 (0x00007fb63da25000)
librt.so.1 => /lib64/librt.so.1 (0x00007fb63d81d000)
libcrypt.so.1 => /lib64/libcrypt.so.1 (0x00007fb63d5e6000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x00007fb63d3ca000)
libdl.so.2 => /lib64/libdl.so.2 (0x00007fb63d1c6000)
libc.so.6 => /lib64/libc.so.6 (0x00007fb63cdf8000)
/lib64/ld-linux-x86-64.so.2 (0x00007fb63e5e4000)
libfreebl3.so => /lib64/libfreebl3.so (0x00007fb63cbf5000)

# ldd /usr/local/php7/bin/php

/lib64/ld-linux-x86-64.so.2 (0x00007ffadb8d3000)
libbz2.so.1 => /lib64/libbz2.so.1 (0x00007ffad4ed8000)
libcom_err.so.2 => /lib64/libcom_err.so.2 (0x00007ffad7d23000)
libcrypto.so.10 => /lib64/libcrypto.so.10 (0x00007ffad45dc000)
libcrypto.so.1.1 => /usr/local/ssl/lib/libcrypto.so.1.1 (0x00007ffad91a8000)
libcrypt.so.1 => /lib64/libcrypt.so.1 (0x00007ffadb69c000)
libc.so.6 => /lib64/libc.so.6 (0x00007ffad742c000)
libcurl.so.4 => /lib64/libcurl.so.4 (0x00007ffad7ab9000)
libdl.so.2 => /lib64/libdl.so.2 (0x00007ffada34d000)
libfreebl3.so => /lib64/libfreebl3.so (0x00007ffad7229000)
libfreetype.so.6 => /lib64/libfreetype.so.6 (0x00007ffad77fa000)
libgssapi_krb5.so.2 => /lib64/libgssapi_krb5.so.2 (0x00007ffad8443000)
libidn.so.11 => /lib64/libidn.so.11 (0x00007ffad67a0000)
libifasf.so => /home/informix/lib/libifasf.so (0x00007ffadac28000)
libifcli.so => /home/informix/lib/cli/libifcli.so (0x00007ffadb2e3000)
libifdmr.so => /home/informix/lib/cli/libifdmr.so (0x00007ffadb0db000)
libifgen.so => /home/informix/lib/esql/libifgen.so (0x00007ffada9c6000)
libifgls.so => /home/informix/lib/esql/libifgls.so (0x00007ffada551000)
libifglx.so => /home/informix/lib/esql/libifglx.so (0x00007ffada14b000)
libifos.so => /home/informix/lib/esql/libifos.so (0x00007ffada7a4000)
libifsql.so => /home/informix/lib/esql/libifsql.so (0x00007ffadae87000)
libjpeg.so.62 => /lib64/libjpeg.so.62 (0x00007ffad9693000)
libk5crypto.so.3 => /lib64/libk5crypto.so.3 (0x00007ffad7f27000)
libkeyutils.so.1 => /lib64/libkeyutils.so.1 (0x00007ffad69d3000)
libkrb5.so.3 => /lib64/libkrb5.so.3 (0x00007ffad815a000)
libkrb5support.so.0 => /lib64/libkrb5support.so.0 (0x00007ffad6bd7000)
liblber-2.4.so.2 => /lib64/liblber-2.4.so.2 (0x00007ffad533d000)
libldap-2.4.so.2 => /lib64/libldap-2.4.so.2 (0x00007ffad50e8000)
liblzma.so.5 => /lib64/liblzma.so.5 (0x00007ffad6de7000)
libm.so.6 => /lib64/libm.so.6 (0x00007ffad8c14000)
libnsl.so.1 => /lib64/libnsl.so.1 (0x00007ffad89fa000)
libnspr4.so => /lib64/libnspr4.so (0x00007ffad554c000)
libnss3.so => /lib64/libnss3.so (0x00007ffad5dc3000)
libnssutil3.so => /lib64/libnssutil3.so (0x00007ffad5b93000)
libpcre.so.1 => /lib64/libpcre.so.1 (0x00007ffad415d000)
libplc4.so => /lib64/libplc4.so (0x00007ffad578a000)
libplds4.so => /lib64/libplds4.so (0x00007ffad598f000)
libpng15.so.15 => /lib64/libpng15.so.15 (0x00007ffad98e8000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x00007ffad700d000)
libresolv.so.2 => /lib64/libresolv.so.2 (0x00007ffad9d1b000)
librt.so.1 => /lib64/librt.so.1 (0x00007ffad9b13000)
libsasl2.so.3 => /lib64/libsasl2.so.3 (0x00007ffad43bf000)
libselinux.so.1 => /lib64/libselinux.so.1 (0x00007ffad4cb1000)
libsmime3.so => /lib64/libsmime3.so (0x00007ffad60f2000)
libssh2.so.1 => /lib64/libssh2.so.1 (0x00007ffad6573000)
libssl3.so => /lib64/libssl3.so (0x00007ffad631a000)
libssl.so.10 => /lib64/libssl.so.10 (0x00007ffad4a3f000)
libssl.so.1.1 => /usr/local/ssl/lib/libssl.so.1.1 (0x00007ffad8f16000)
libxml2.so.2 => /lib64/libxml2.so.2 (0x00007ffad8690000)
libz.so.1 => /lib64/libz.so.1 (0x00007ffad9f35000)
linux-vdso.so.1 =>  (0x00007fffe9bb3000)
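
(For what it's worth, the ldd output above already shows the cause: PHP links the OpenSSL 1.1 build in /usr/local/ssl, while the distro libssh2, a libcurl dependency, links the system OpenSSL 1.0 libraries, so both generations end up in one process; that is exactly what ld is warning about. A hedged check below; building curl and libssh2 against the same OpenSSL 1.1, or pointing PHP at the system OpenSSL, are possible directions rather than guaranteed fixes.)

ldd /usr/local/php7/bin/php | grep -E 'libssl|libcrypto'
# both .so.10 (system 1.0.x, pulled in by libssh2 via libcurl) and .so.1.1
# (the /usr/local/ssl build) appear, confirming the mixed linkage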

Docker daemon ignores daemon.json on boot

Posted: 24 Mar 2021 07:14 PM PDT

My Docker Daemon seems to ignore /etc/docker/daemon.json on boot.

Similar to this question, I'm having some trouble telling the Docker daemon that it should not use the default 172.17.* range. That range is already claimed by our VPN and prevents people connected through that VPN from making a connection to the server Docker runs on.

The hugely annoying thing is that every time I reboot my server, Docker claims an IP from the VPN's range again, regardless of what I put in /etc/docker/daemon.json. I have to manually issue

# systemctl restart docker  

directly after boot before people on the 172.17.* network can reach the server again.

This obviously gets forgotten quite often and leads to many problem tickets.

My /etc/docker/daemon.json looks like this:

{   "default-address-pools": [     {        "base": "172.20.0.0/16",        "size": 24     }   ]  }  

and is permissioned like so:

-rw-r--r--   1 root root   123 Dec  8 10:43 daemon.json  

I have no idea how to even start diagnosing this problem; any ideas?
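
(Some starting points, hedged; the commands are standard, and the snap check is just one known way a second daemon can end up ignoring /etc/docker/daemon.json:)

journalctl -b -u docker | head -n 50     # config/parse errors logged at boot
systemctl cat docker | grep ExecStart    # a --bip or --config-file flag here overrides the file
snap list docker 2>/dev/null             # a snap-installed docker ignores /etc/docker/daemon.json
ip addr show docker0                     # which subnet the bridge actually claimed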

For completeness:

  • Ubuntu 18.04.5 LTS
  • Docker version 19.03.6, build 369ce74a3c

Istio Multi-master Multi-network Locality Failover Woes

Posted: 24 Mar 2021 07:12 PM PDT

I can't get "multi-primary multi-network" to play nice with locality failover (or locality load balancing, for that matter). The endpoints are registered fine. The istio-system namespace is labeled with network information, each node is labeled with zone and region information, and when I check the /clusters page on the client's Envoy admin interface, the zone and region information is set correctly for each endpoint.

The issue seems to be that the control plane isn't assigning priority to the endpoints. However, according to a (possibly stale) source, this should work automatically, provided that I've created a DestinationRule (which I have). I've also created a VirtualService for good measure.

$ istioctl proxy-config endpoints -n client client-6889f68cbc-z5jb6 --cluster "outbound|80||server.server.svc.cluster.local" -o json | jq '.[0].hostStatuses[] | del(.stats)'
{
  "address": {
    "socketAddress": {
      "address": "10.244.1.25",
      "portValue": 80
    }
  },
  "healthStatus": {
    "edsHealthStatus": "HEALTHY"
  },
  "weight": 1,
  "locality": {
    "region": "region2",
    "zone": "zone2"
  }
}
{
  "address": {
    "socketAddress": {
      "address": "172.18.254.1",
      "portValue": 15443
    }
  },
  "healthStatus": {
    "edsHealthStatus": "HEALTHY"
  },
  "weight": 3,
  "locality": {
    "region": "region1",
    "zone": "zone1"
  }
}

My setup is two Kubernetes 1.20.2 clusters running locally using KinD + MetalLB, with Istio operator v1.9.1. Each cluster is configured to occupy a different region and zone.

Istio VS and DR

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: server
  namespace: server
spec:
  host: server
  trafficPolicy:
    connectionPool:
      http:
        http2MaxRequests: 10
        maxRequestsPerConnection: 10
    loadBalancer:
      localityLbSetting:
        enabled: true
      simple: ROUND_ROBIN
    outlierDetection:
      baseEjectionTime: 1m
      consecutive5xxErrors: 1
      interval: 1s
      maxEjectionPercent: 51
      minHealthPercent: 0

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: server
  namespace: server
spec:
  hosts:
  - server
  http:
  - route:
    - destination:
        host: server

Kiali View

kiali view

As you can see from the Kiali dashboard, the DR and VS are both active. Both clusters are routable. But traffic is flowing to both equally, where it ought to be flowing only to one. I've also tried specifying distribute and failover explicitly in my DR spec with no success.
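
(One thing worth ruling out, as a hedged sketch; the field names follow the Istio operator and MeshConfig APIs but are worth double-checking against the 1.9 docs: locality load balancing can also be toggled mesh-wide, and an operator-level override on either cluster would mask the DestinationRule setting.)

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    localityLbSetting:
      enabled: true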

Is this a normal number of database connections?

Posted: 24 Mar 2021 05:57 PM PDT

Check out this graph of my database connections. I am running a Postgres database on Lightsail, and I am connecting to it from a single Ubuntu instance running a single instance of Express and Sequelize. While I'm very comfortable with all of this from a coding perspective, I don't know shit about this from a devops perspective, so I am a little confused by the number of database connections.

You can see that in the past it has spiked to nearly 400. I haven't experienced any performance issues, and when I look at the CPU utilization over the same period, there is nothing concerning at all. But still I am wondering if the number of connections is normal. From my very dumb understanding of the issue, I am using Sequelize as an ORM, and I am using the default connection pool (5), so it seems that there would only ever be up to 5 simultaneous connections.

Database connections over one week
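
(For what it's worth, a quick way to see where those connections come from, offered as a hedged sketch; pg_stat_activity is standard Postgres, though a managed Lightsail database may restrict some columns:)

-- count live connections by state, and by user/application
SELECT state, count(*) FROM pg_stat_activity GROUP BY state ORDER BY 2 DESC;
SELECT usename, application_name, count(*) FROM pg_stat_activity GROUP BY 1, 2 ORDER BY 3 DESC;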

How to deliver certificate randomly to browser, according to the backend IP/server picked by HAProxy?

Posted: 24 Mar 2021 10:29 PM PDT

I am a beginner with HAProxy, and this is what I am trying to achieve. I have 4 VMs: one runs the HAProxy server, and the other three run Apache httpd. I have test.html on all three servers. When a user hits https://haproxy_ip/test.html, the file may be delivered from any of the three servers.

I generated a separate SSL certificate on each VM (by following these URLs: How to enable https on Apache CentOS - TechRepublic and https://www.suse.com/support/kb/doc/?id=000018152) and copied the pem and key files to the HAProxy VM. Now all three pem files are available under the /etc/haproxy/ directory.

I have configured an ssl crt-list so HAProxy picks the corresponding SSL certificate; below is what crt-list.txt looks like:

/etc/haproxy/testserver1.pem testserver1
/etc/haproxy/testserver2.pem testserver2
/etc/haproxy/testserver3.pem testserver3

What I am looking for is this: when a user requests https://haproxy_ip/test.html in a browser, the certificate delivered should correspond to the backend server HAProxy picks.

Is this possible / supported by HAProxy? If yes, can somebody please help me?

Below is my current configuration:

global
    maxconn 50000
    log /dev/log local0
    log /dev/log local1 notice
    user root
    group root
    stats timeout 30s
    nbproc 2
    cpu-map auto:1/1-4 0-3
    ssl-default-bind-ciphers ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256
    ssl-default-bind-options ssl-min-ver TLSv1.2 no-tls-tickets
    daemon

defaults
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms

frontend ft_http
    bind :80
    mode http
    default_backend bk_http

frontend ft_https
    bind :443 ssl crt-list /etc/haproxy/crt-list.txt
    mode tcp
    default_backend bk_https

backend bk_http
    mode http
    balance roundrobin
    default-server inter 1s
    server testserver1 192.168.0.1:80 check
    server testserver2 192.168.0.2:80 check
    server testserver3 192.168.0.3:80 check

backend bk_https
    mode tcp
    balance roundrobin
    stick-table type ip size 200k expire 1m
    default-server inter 1s
    server testserver1 192.168.0.1:443 check
    server testserver2 192.168.0.2:443 check
    server testserver3 192.168.0.3:443 check

Thanks.

EDIT

Let me explain the scenario and why I am trying to achieve this:

Say I have two customers (two different domains) who have set up CNAME DNS entries, so that when a user enters https://myapp.customer1.com or https://myapp.customer2.com in a browser, it resolves to my server, where I have HAProxy in place. Also, let's say a customer is not storing / not willing to store the subdomain certificate on his server for some reason. In that case, I need to store and maintain those certificates on my server. Since both customers use different servers, I can't use wildcard certificates. Also, let's say I don't prefer SANs either.

In this scenario, how can I deliver the corresponding certificate (according to the domain the user requested) from my server using HAProxy? I hope you understand what I am trying to achieve.
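
(For the record, a hedged sketch of the standard model: the certificate is chosen during the TLS handshake from the SNI hostname the browser sends, before any backend is selected, so HAProxy cannot pick a certificate based on the backend. It can, however, pick per requested domain via SNI filters in the crt-list; the filenames below are hypothetical.)

/etc/haproxy/myapp.customer1.com.pem myapp.customer1.com
/etc/haproxy/myapp.customer2.com.pem myapp.customer2.com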

How to debug: ssh_exchange_identification: Connection closed by remote host

Posted: 24 Mar 2021 10:28 PM PDT

SSH by private IP is fine

I'm able to connect to a server through SSH by its private IP address:

C:\Users\m3>ssh -vvvvA uconn@192.168.1.11
OpenSSH_for_Windows_7.7p1, LibreSSL 2.6.5
debug3: Failed to open file:C:/Users/m3/.ssh/config error:2
debug3: Failed to open file:C:/ProgramData/ssh/ssh_config error:2
debug2: resolve_canonicalize: hostname 192.168.1.11 is address
debug2: ssh_connect_direct: needpriv 0
debug1: Connecting to 192.168.1.11 [192.168.1.11] port 22.
debug1: Connection established.
debug3: Failed to open file:C:/Users/m3/.ssh/id_rsa error:2
debug3: Failed to open file:C:/Users/m3/.ssh/id_rsa.pub error:2
debug1: key_load_public: No such file or directory
debug1: identity file C:\\Users\\m3/.ssh/id_rsa type -1
debug3: Failed to open file:C:/Users/m3/.ssh/id_rsa-cert error:2
debug3: Failed to open file:C:/Users/m3/.ssh/id_rsa-cert.pub error:2
debug1: key_load_public: No such file or directory
debug1: identity file C:\\Users\\m3/.ssh/id_rsa-cert type -1
debug3: Failed to open file:C:/Users/m3/.ssh/id_dsa error:2
debug3: Failed to open file:C:/Users/m3/.ssh/id_dsa.pub error:2
debug1: key_load_public: No such file or directory
debug1: identity file C:\\Users\\m3/.ssh/id_dsa type -1
debug3: Failed to open file:C:/Users/m3/.ssh/id_dsa-cert error:2
debug3: Failed to open file:C:/Users/m3/.ssh/id_dsa-cert.pub error:2
debug1: key_load_public: No such file or directory
debug1: identity file C:\\Users\\m3/.ssh/id_dsa-cert type -1
debug3: Failed to open file:C:/Users/m3/.ssh/id_ecdsa error:2
debug3: Failed to open file:C:/Users/m3/.ssh/id_ecdsa.pub error:2
debug1: key_load_public: No such file or directory
debug1: identity file C:\\Users\\m3/.ssh/id_ecdsa type -1
debug3: Failed to open file:C:/Users/m3/.ssh/id_ecdsa-cert error:2
debug3: Failed to open file:C:/Users/m3/.ssh/id_ecdsa-cert.pub error:2
debug1: key_load_public: No such file or directory
debug1: identity file C:\\Users\\m3/.ssh/id_ecdsa-cert type -1
debug3: Failed to open file:C:/Users/m3/.ssh/id_ed25519 error:2
debug3: Failed to open file:C:/Users/m3/.ssh/id_ed25519.pub error:2
debug1: key_load_public: No such file or directory
debug1: identity file C:\\Users\\m3/.ssh/id_ed25519 type -1
debug3: Failed to open file:C:/Users/m3/.ssh/id_ed25519-cert error:2
debug3: Failed to open file:C:/Users/m3/.ssh/id_ed25519-cert.pub error:2
debug1: key_load_public: No such file or directory
debug1: identity file C:\\Users\\m3/.ssh/id_ed25519-cert type -1
debug3: Failed to open file:C:/Users/m3/.ssh/id_xmss error:2
debug3: Failed to open file:C:/Users/m3/.ssh/id_xmss.pub error:2
debug1: key_load_public: No such file or directory
debug1: identity file C:\\Users\\m3/.ssh/id_xmss type -1
debug3: Failed to open file:C:/Users/m3/.ssh/id_xmss-cert error:2
debug3: Failed to open file:C:/Users/m3/.ssh/id_xmss-cert.pub error:2
debug1: key_load_public: No such file or directory
debug1: identity file C:\\Users\\m3/.ssh/id_xmss-cert type -1
debug1: Local version string SSH-2.0-OpenSSH_for_Windows_7.7
debug1: Remote protocol version 2.0, remote software version OpenSSH_7.2p2 Ubuntu-4ubuntu2.10
debug1: match: OpenSSH_7.2p2 Ubuntu-4ubuntu2.10 pat OpenSSH* compat 0x04000000
debug2: fd 3 setting O_NONBLOCK
debug1: Authenticating to 192.168.1.11:22 as 'uconn'
debug3: hostkeys_foreach: reading file "C:\\Users\\m3/.ssh/known_hosts"
debug3: record_hostkey: found key type ECDSA in file C:\\Users\\m3/.ssh/known_hosts:1
debug3: load_hostkeys: loaded 1 keys from 192.168.1.11
debug3: Failed to open file:C:/Users/m3/.ssh/known_hosts2 error:2
debug3: Failed to open file:C:/ProgramData/ssh/ssh_known_hosts error:2
debug3: Failed to open file:C:/ProgramData/ssh/ssh_known_hosts2 error:2
debug3: order_hostkeyalgs: prefer hostkeyalgs: ecdsa-sha2-nistp256-cert-v01@openssh.com,ecdsa-sha2-nistp384-cert-v01@openssh.com,ecdsa-sha2-nistp521-cert-v01@openssh.com,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521
debug3: send packet: type 20
debug1: SSH2_MSG_KEXINIT sent
debug3: receive packet: type 20
debug1: SSH2_MSG_KEXINIT received
debug2: local client KEXINIT proposal
debug2: KEX algorithms: curve25519-sha256,curve25519-sha256@libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha256,diffie-hellman-group14-sha1,ext-info-c
debug2: host key algorithms: ecdsa-sha2-nistp256-cert-v01@openssh.com,ecdsa-sha2-nistp384-cert-v01@openssh.com,ecdsa-sha2-nistp521-cert-v01@openssh.com,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,ssh-ed25519-cert-v01@openssh.com,ssh-rsa-cert-v01@openssh.com,ssh-ed25519,rsa-sha2-512,rsa-sha2-256,ssh-rsa
debug2: ciphers ctos: chacha20-poly1305@openssh.com,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com
debug2: ciphers stoc: chacha20-poly1305@openssh.com,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com
debug2: MACs ctos: umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1
debug2: MACs stoc: umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1
debug2: compression ctos: none
debug2: compression stoc: none
debug2: languages ctos:
debug2: languages stoc:
debug2: first_kex_follows 0
debug2: reserved 0
debug2: peer server KEXINIT proposal
debug2: KEX algorithms: curve25519-sha256@libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1
debug2: host key algorithms: ssh-rsa,rsa-sha2-512,rsa-sha2-256,ecdsa-sha2-nistp256,ssh-ed25519
debug2: ciphers ctos: chacha20-poly1305@openssh.com,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com
debug2: ciphers stoc: chacha20-poly1305@openssh.com,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com
debug2: MACs ctos: umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1
debug2: MACs stoc: umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1
debug2: compression ctos: none,zlib@openssh.com
debug2: compression stoc: none,zlib@openssh.com
debug2: languages ctos:
debug2: languages stoc:
debug2: first_kex_follows 0
debug2: reserved 0
debug1: kex: algorithm: curve25519-sha256@libssh.org
debug1: kex: host key algorithm: ecdsa-sha2-nistp256
debug1: kex: server->client cipher: chacha20-poly1305@openssh.com MAC: <implicit> compression: none
debug1: kex: client->server cipher: chacha20-poly1305@openssh.com MAC: <implicit> compression: none
debug3: send packet: type 30
debug1: expecting SSH2_MSG_KEX_ECDH_REPLY
debug3: receive packet: type 31
debug1: Server host key: ecdsa-sha2-nistp256 SHA256:eyPiBvKLgJOk1xJc0k6cx9UnwIXbUUaXu9pPHTKt5Rg
debug3: hostkeys_foreach: reading file "C:\\Users\\m3/.ssh/known_hosts"
debug3: record_hostkey: found key type ECDSA in file C:\\Users\\m3/.ssh/known_hosts:1
debug3: load_hostkeys: loaded 1 keys from 192.168.1.11
debug3: Failed to open file:C:/Users/m3/.ssh/known_hosts2 error:2
debug3: Failed to open file:C:/ProgramData/ssh/ssh_known_hosts error:2
debug3: Failed to open file:C:/ProgramData/ssh/ssh_known_hosts2 error:2
debug1: Host '192.168.1.11' is known and matches the ECDSA host key.
debug1: Found key in C:\\Users\\m3/.ssh/known_hosts:1
debug3: send packet: type 21
debug2: set_newkeys: mode 1
debug1: rekey after 134217728 blocks
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug3: receive packet: type 21
debug1: SSH2_MSG_NEWKEYS received
debug2: set_newkeys: mode 0
debug1: rekey after 134217728 blocks
debug3: unable to connect to pipe \\\\.\\pipe\\openssh-ssh-agent, error: 2
debug1: pubkey_prepare: ssh_get_authentication_socket: No such file or directory
debug2: key: C:\\Users\\m3/.ssh/id_rsa (0000000000000000)
debug2: key: C:\\Users\\m3/.ssh/id_dsa (0000000000000000)
debug2: key: C:\\Users\\m3/.ssh/id_ecdsa (0000000000000000)
debug2: key: C:\\Users\\m3/.ssh/id_ed25519 (0000000000000000)
debug2: key: C:\\Users\\m3/.ssh/id_xmss (0000000000000000)
debug3: send packet: type 5
debug3: receive packet: type 7
debug1: SSH2_MSG_EXT_INFO received
debug1: kex_input_ext_info: server-sig-algs=<rsa-sha2-256,rsa-sha2-512>
debug3: receive packet: type 6
debug2: service_accept: ssh-userauth
debug1: SSH2_MSG_SERVICE_ACCEPT received
debug3: send packet: type 50
debug3: receive packet: type 51
debug1: Authentications that can continue: publickey,password
debug3: start over, passed a different list publickey,password
debug3: preferred publickey,keyboard-interactive,password
debug3: authmethod_lookup publickey
debug3: remaining preferred: keyboard-interactive,password
debug3: authmethod_is_enabled publickey
debug1: Next authentication method: publickey
debug1: Trying private key: C:\\Users\\m3/.ssh/id_rsa
debug3: no such identity: C:\\Users\\m3/.ssh/id_rsa: No such file or directory
debug1: Trying private key: C:\\Users\\m3/.ssh/id_dsa
debug3: no such identity: C:\\Users\\m3/.ssh/id_dsa: No such file or directory
debug1: Trying private key: C:\\Users\\m3/.ssh/id_ecdsa
debug3: no such identity: C:\\Users\\m3/.ssh/id_ecdsa: No such file or directory
debug1: Trying private key: C:\\Users\\m3/.ssh/id_ed25519
debug3: no such identity: C:\\Users\\m3/.ssh/id_ed25519: No such file or directory
debug1: Trying private key: C:\\Users\\m3/.ssh/id_xmss
debug3: no such identity: C:\\Users\\m3/.ssh/id_xmss: No such file or directory
debug2: we did not send a packet, disable method
debug3: authmethod_lookup password
debug3: remaining preferred: ,password
debug3: authmethod_is_enabled password
debug1: Next authentication method: password
debug3: failed to open file:C:/dev/tty error:3
debug1: read_passphrase: can't open /dev/tty: No such file or directory
uconn@192.168.1.11's password:
debug3: send packet: type 50
debug2: we sent a password packet, wait for reply
debug3: receive packet: type 52
debug1: Authentication succeeded (password).
Authenticated to 192.168.1.11 ([192.168.1.11]:22).
debug1: channel 0: new [client-session]
debug3: ssh_session2_open: channel_new: 0
debug2: channel 0: send open
debug3: send packet: type 90
debug1: Requesting no-more-sessions@openssh.com
debug3: send packet: type 80
debug1: Entering interactive session.
debug1: pledge: network
debug1: console supports the ansi parsing
debug3: Successfully set console output code page from:437 to 65001
debug3: Successfully set console input code page from:437 to 65001
debug3: receive packet: type 80
debug1: client_input_global_request: rtype hostkeys-00@openssh.com want_reply 0
debug3: receive packet: type 91
debug2: channel_input_open_confirmation: channel 0: callback start
debug3: unable to connect to pipe \\\\.\\pipe\\openssh-ssh-agent, error: 2
debug1: ssh_get_authentication_socket: No such file or directory
debug2: fd 3 setting TCP_NODELAY
debug2: client_session2_setup: id 0
debug2: channel 0: request pty-req confirm 1
debug3: send packet: type 98
debug2: channel 0: request shell confirm 1
debug3: send packet: type 98
debug2: channel_input_open_confirmation: channel 0: callback done
debug2: channel 0: open confirm rwindow 0 rmax 32768
debug3: receive packet: type 99
debug2: channel_input_status_confirm: type 99 id 0
debug2: PTY allocation request accepted on channel 0
debug2: channel 0: rcvd adjust 2097152
debug3: receive packet: type 99
debug2: channel_input_status_confirm: type 99 id 0
debug2: shell request accepted on channel 0
Welcome to Ubuntu 16.04.7 LTS (GNU/Linux 4.4.0-206-generic i686)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

0 packages can be updated.
0 of these updates are security updates.

New release '18.04.5 LTS' available.
Run 'do-release-upgrade' to upgrade to it.

Last login: Tue Mar 23 14:22:05 2021 from 192.168.1.52

SSH by public IP is bad

However, when using its public IP address, I run into an error:

ssh_exchange_identification: Connection closed by remote host

C:\Users\m3>ssh -vvvvA uconn@11.111.11.111
OpenSSH_for_Windows_7.7p1, LibreSSL 2.6.5
debug3: Failed to open file:C:/Users/m3/.ssh/config error:2
debug3: Failed to open file:C:/ProgramData/ssh/ssh_config error:2
debug2: resolve_canonicalize: hostname 11.111.11.111 is address
debug2: ssh_connect_direct: needpriv 0
debug1: Connecting to 11.111.11.111 [11.111.11.111] port 22.
debug1: Connection established.
debug3: Failed to open file:C:/Users/m3/.ssh/id_rsa error:2
debug3: Failed to open file:C:/Users/m3/.ssh/id_rsa.pub error:2
debug1: key_load_public: No such file or directory
debug1: identity file C:\\Users\\m3/.ssh/id_rsa type -1
debug3: Failed to open file:C:/Users/m3/.ssh/id_rsa-cert error:2
debug3: Failed to open file:C:/Users/m3/.ssh/id_rsa-cert.pub error:2
debug1: key_load_public: No such file or directory
debug1: identity file C:\\Users\\m3/.ssh/id_rsa-cert type -1
debug3: Failed to open file:C:/Users/m3/.ssh/id_dsa error:2
debug3: Failed to open file:C:/Users/m3/.ssh/id_dsa.pub error:2
debug1: key_load_public: No such file or directory
debug1: identity file C:\\Users\\m3/.ssh/id_dsa type -1
debug3: Failed to open file:C:/Users/m3/.ssh/id_dsa-cert error:2
debug3: Failed to open file:C:/Users/m3/.ssh/id_dsa-cert.pub error:2
debug1: key_load_public: No such file or directory
debug1: identity file C:\\Users\\m3/.ssh/id_dsa-cert type -1
debug3: Failed to open file:C:/Users/m3/.ssh/id_ecdsa error:2
debug3: Failed to open file:C:/Users/m3/.ssh/id_ecdsa.pub error:2
debug1: key_load_public: No such file or directory
debug1: identity file C:\\Users\\m3/.ssh/id_ecdsa type -1
debug3: Failed to open file:C:/Users/m3/.ssh/id_ecdsa-cert error:2
debug3: Failed to open file:C:/Users/m3/.ssh/id_ecdsa-cert.pub error:2
debug1: key_load_public: No such file or directory
debug1: identity file C:\\Users\\m3/.ssh/id_ecdsa-cert type -1
debug3: Failed to open file:C:/Users/m3/.ssh/id_ed25519 error:2
debug3: Failed to open file:C:/Users/m3/.ssh/id_ed25519.pub error:2
debug1: key_load_public: No such file or directory
debug1: identity file C:\\Users\\m3/.ssh/id_ed25519 type -1
debug3: Failed to open file:C:/Users/m3/.ssh/id_ed25519-cert error:2
debug3: Failed to open file:C:/Users/m3/.ssh/id_ed25519-cert.pub error:2
debug1: key_load_public: No such file or directory
debug1: identity file C:\\Users\\m3/.ssh/id_ed25519-cert type -1
debug3: Failed to open file:C:/Users/m3/.ssh/id_xmss error:2
debug3: Failed to open file:C:/Users/m3/.ssh/id_xmss.pub error:2
debug1: key_load_public: No such file or directory
debug1: identity file C:\\Users\\m3/.ssh/id_xmss type -1
debug3: Failed to open file:C:/Users/m3/.ssh/id_xmss-cert error:2
debug3: Failed to open file:C:/Users/m3/.ssh/id_xmss-cert.pub error:2
debug1: key_load_public: No such file or directory
debug1: identity file C:\\Users\\m3/.ssh/id_xmss-cert type -1
debug1: Local version string SSH-2.0-OpenSSH_for_Windows_7.7
ssh_exchange_identification: Connection closed by remote host

How to debug

What could be the cause? How can I debug the issue?

Router port forwarding

The server has a private IP address, but there is a router with the public IP address that forwards SSH port 22 to the private IP address.

Router port forwarding

sshd log

As suggested here, I used this command on the server to log sshd output:

$ tail -f -n 500 /var/log/auth.log | grep 'sshd'  

When I run ssh uconn@192.168.1.11 on client I get the following log:

Mar 23 17:26:10 server-homeshine sshd[1355]: Accepted password for uconn from 192.168.1.52 port 53107 ssh2
Mar 23 17:26:10 server-homeshine sshd[1355]: pam_unix(sshd:session): session opened for user uconn by (uid=0)

But when I run ssh uconn@11.111.11.111 on the client, no log appears at all. This implies the router is not forwarding port 22 when the public IP address is used. I'm not sure why.

SSHD config

sshd config on server is:

uconn@server-homeshine:/etc/ssh$ cat sshd_config
# Package generated configuration file
# See the sshd_config(5) manpage for details

# What ports, IPs and protocols we listen for
Port 22
# Use these options to restrict which interfaces/protocols sshd will bind to
ListenAddress ::
ListenAddress 0.0.0.0
Protocol 2
# HostKeys for protocol version 2
HostKey /etc/ssh/ssh_host_rsa_key
HostKey /etc/ssh/ssh_host_dsa_key
HostKey /etc/ssh/ssh_host_ecdsa_key
HostKey /etc/ssh/ssh_host_ed25519_key
#Privilege Separation is turned on for security
UsePrivilegeSeparation yes

# Lifetime and size of ephemeral version 1 server key
KeyRegenerationInterval 3600
ServerKeyBits 1024

# Logging
SyslogFacility AUTH
LogLevel INFO

# Authentication:
LoginGraceTime 120
PermitRootLogin prohibit-password
StrictModes yes

RSAAuthentication yes
PubkeyAuthentication yes
#AuthorizedKeysFile     %h/.ssh/authorized_keys

# Don't read the user's ~/.rhosts and ~/.shosts files
IgnoreRhosts yes
# For this to work you will also need host keys in /etc/ssh_known_hosts
RhostsRSAAuthentication no
# similar for protocol version 2
HostbasedAuthentication no
# Uncomment if you don't trust ~/.ssh/known_hosts for RhostsRSAAuthentication
#IgnoreUserKnownHosts yes

# To enable empty passwords, change to yes (NOT RECOMMENDED)
PermitEmptyPasswords no

# Change to yes to enable challenge-response passwords (beware issues with
# some PAM modules and threads)
ChallengeResponseAuthentication no

# Change to no to disable tunnelled clear text passwords
#PasswordAuthentication yes

# Kerberos options
#KerberosAuthentication no
#KerberosGetAFSToken no
#KerberosOrLocalPasswd yes
#KerberosTicketCleanup yes

# GSSAPI options
#GSSAPIAuthentication no
#GSSAPICleanupCredentials yes

X11Forwarding yes
X11DisplayOffset 10
PrintMotd no
PrintLastLog yes
TCPKeepAlive yes
#UseLogin no

#MaxStartups 10:30:60
#Banner /etc/issue.net

# Allow client to pass locale environment variables
AcceptEnv LANG LC_*

Subsystem sftp /usr/lib/openssh/sftp-server

# Set this to 'yes' to enable PAM authentication, account processing,
# and session processing. If this is enabled, PAM authentication will
# be allowed through the ChallengeResponseAuthentication and
# PasswordAuthentication.  Depending on your PAM configuration,
# PAM authentication via ChallengeResponseAuthentication may bypass
# the setting of "PermitRootLogin without-password".
# If you just want the PAM account and session checks to run without
# PAM authentication, then enable this but set PasswordAuthentication
# and ChallengeResponseAuthentication to 'no'.
UsePAM yes

IP tables

Here is the IP tables on server:

$ sudo iptables -S
-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT

$ sudo ip6tables -S
-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT

Routing table

Server routing table:

$ sudo route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.1.1     0.0.0.0         UG    0      0        0 enp9s0
192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 enp9s0

Wireshark/Tshark

Installing tshark on the server and examining the network packets shows that when running ssh uconn@192.168.1.11 (private IP) on a client, the SSH packets are received by the server.

But when running ssh uconn@11.111.11.111 (public IP) on a client, no SSH packets reach the server at all.

The conclusion is that the ADSL router is not forwarding SSH packets to the server.
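A quick way to confirm that directly is to capture on the server's LAN interface while attempting both connections from the client (a sketch; enp9s0 is the interface name from the routing table above):

    # run on the server, then try ssh to both the private and the public IP
    sudo tcpdump -ni enp9s0 'tcp port 22'

If packets show up for 192.168.1.11 but never for 11.111.11.111, the router is either not forwarding port 22 or does not support NAT loopback (hairpin NAT) for clients connecting from inside the LAN, which is a common limitation of ADSL routers.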

Double VPN not working

Posted: 24 Mar 2021 09:14 PM PDT

I have set up OpenVPN on my Raspberry Pi and it works correctly; I can log in to my Raspberry Pi from my cellphone. I also installed a paid VPN (Windscribe) on the Raspberry Pi. The problem comes when I activate Windscribe with windscribe connect on the Raspberry Pi: after that, I can no longer reach my Raspberry Pi from my cellphone.

I want OpenVPN (PiVPN) so I can access my home network, and I want the Windscribe VPN active at the same time so I can browse the internet safely. Right now I have the first part: I can access my home network when Windscribe is not active.

I've been experimenting a lot with iptables with no success, creating forward rules for interfaces and tunnels in a lot of combinations, but nothing seems to work. In the end I reset everything.

Here are my configurations.

sudo iptables -t nat -S
-P PREROUTING ACCEPT
-P INPUT ACCEPT
-P POSTROUTING ACCEPT
-P OUTPUT ACCEPT
-A POSTROUTING -s 10.8.0.0/24 -o wlan0 -m comment --comment openvpn-nat-rule -j MASQUERADE
-A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE

pi@raspberrypi:~ $ sudo iptables -S
-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT DROP
-A OUTPUT ! -o tun+ -p tcp -m tcp --dport 53 -j DROP
-A OUTPUT ! -o tun+ -p udp -m udp --dport 53 -j DROP
-A OUTPUT -d 192.168.0.0/16 -j ACCEPT
-A OUTPUT -d 10.0.0.0/8 -j ACCEPT
-A OUTPUT -d 172.16.0.0/12 -j ACCEPT
-A OUTPUT -d 104.20.26.217/32 -j ACCEPT
-A OUTPUT -d 104.20.27.217/32 -j ACCEPT
-A OUTPUT -d 172.67.17.175/32 -j ACCEPT
-A OUTPUT -d 104.21.93.29/32 -j ACCEPT
-A OUTPUT -d 172.67.203.127/32 -j ACCEPT
-A OUTPUT -d 104.21.53.216/32 -j ACCEPT
-A OUTPUT -d 172.67.219.39/32 -j ACCEPT
-A OUTPUT -d 172.67.189.40/32 -j ACCEPT
-A OUTPUT -d 104.21.65.74/32 -j ACCEPT
-A OUTPUT -o tun+ -j ACCEPT
-A OUTPUT -d 127.0.0.1/32 -j ACCEPT
-A OUTPUT -d 209.58.129.121/32 -j ACCEPT

pi@raspberrypi:~ $ ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.0.111  netmask 255.255.255.0  broadcast 192.168.0.255
        ether b8:27:eb:ec:6a:4b  txqueuelen 1000  (Ethernet)
        RX packets 19989  bytes 21885907 (20.8 MiB)
        RX errors 160  dropped 4  overruns 0  frame 0
        TX packets 11508  bytes 1206589 (1.1 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 618  bytes 201828 (197.0 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 618  bytes 201828 (197.0 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

tun0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST>  mtu 1500
        inet 10.8.0.1  netmask 255.255.255.0  destination 10.8.0.1
        unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  txqueuelen 100  (UNSPEC)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

tun1: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST>  mtu 1500
        inet 10.120.138.29  netmask 255.255.254.0  destination 10.120.138.29
        unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  txqueuelen 100  (UNSPEC)
        RX packets 164  bytes 32755 (31.9 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 961  bytes 114896 (112.2 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

wlan0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        ether b8:27:eb:b9:3f:1e  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

pi@raspberrypi:~ $ ip route list
0.0.0.0/1 via 10.120.138.1 dev tun1
default via 192.168.0.1 dev eth0 src 192.168.0.111 metric 202
10.8.0.0/24 dev tun0 proto kernel scope link src 10.8.0.1
10.120.138.0/23 dev tun1 proto kernel scope link src 10.120.138.29
128.0.0.0/1 via 10.120.138.1 dev tun1
192.168.0.0/24 dev eth0 proto dhcp scope link src 192.168.0.111 metric 202
209.58.129.121 via 192.168.0.1 dev eth0

pi@raspberrypi:~ $ ip rule list
0:      from all lookup local
32766:  from all lookup main
32767:  from all lookup default

UPDATE: I found this tutorial, and it helped me a lot: comparitech.com/blog/vpn-privacy/raspberry-pi-vpn. But I found that when I set these two rules

ip rule add from 192.168.1.2 lookup 101
ip route add default via 192.168.1.1 table 101

I can access the VPN, but I can no longer ping my VPN server at 192.168.0.111 as before; now I have to use 10.8.0.1. Any ideas how to re-enable ping to 192.168.0.111?
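One pattern that usually addresses this (a sketch, using the addresses from the output above: 192.168.0.111 on eth0, LAN gateway 192.168.0.1, and an arbitrary table number 101) is a source-based rule so traffic originating from the Pi's own LAN address bypasses the Windscribe default route:

    # replies from the Pi's LAN address leave via eth0, not tun1
    sudo ip rule add from 192.168.0.111 lookup 101
    sudo ip route add 192.168.0.0/24 dev eth0 src 192.168.0.111 table 101
    sudo ip route add default via 192.168.0.1 dev eth0 table 101

With that in place, pings and OpenVPN handshakes addressed to 192.168.0.111 should be answered out the LAN interface even while tun1 carries the default route.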

DELL PERC status needs attention

Posted: 24 Mar 2021 09:57 PM PDT

If anyone is familiar with the Dell PowerEdge RAID Controllers (PERC) used in their servers:

I have a PERC H740P that, when I press F2 into the BIOS and navigate to the PERC device, says STATUS: needs attention.

Does anyone know how to figure out what exactly needs attention? I have cleared the configuration and tried to clear everything else, yet it always says needs attention. I've cycled power, held the power button down while power was disconnected, and created new virtual disks; everything works fine, it just always says needs attention. Can someone tell me what might be causing this?

Beginner: Ansible "The offending line appears to be"

Posted: 24 Mar 2021 06:04 PM PDT

I'm learning how to use Ansible and am writing a playbook for my local desktop. I'm using the Atom editor with a linter installed. I get no errors while writing, but when I execute the playbook I get the error "The offending line appears to be".

Here's my current playbook:

---
- hosts: localhost
  tasks:

  - name: Install .deb packages from the internet.
    apt:
    deb:
    - https://packagecloud.io/AtomEditor/atom/any/
    - https://updates.signal.org/desktop/apt
    - http://ppa.launchpad.net/webupd8team/brackets/ubuntu
    - http://ppa.launchpad.net/nextcloud-devs/client/ubuntu
    - http://repository.spotify.com stable non-free
    - http://download.xnview.com/XnConvert-linux-x64.deb
    - https://updates.signal.org/desktop/apt xenial main

  - name: Install a list of packages
    update_cache: yes
    apt:
      pkg:
      - AtomEditor
      - brackets
      - calibre
      - chromium-browser
      - filezilla
      - firefox-locale-de
      - gimp
      - gparted
      - gscan2pdf
      - gstreamer1.0-pulseaudio
      - keepassxc
      - nextcloud-client
      - nextcloud-client-nautilus
      - pdfshuffler
      - python-nautilus
      - spotify
      - tipp10
      - vlc
      - XnConvert

  - name: no tracking
    become: true
    vars:
      packages_absent:
        - apport
        - gnome-intial-setup
        - ubuntu-web-launchers

  - name: Remove useless packages from the cache
    apt:
    autoclean: yes

  - name: Remove dependencies that are no longer required
    apt:
    autoremove: yes

Then my terminal tells me:

The offending line appears to be:

  tasks:

  - name: no tracking
    ^ here

I know it's a beginner's question, and there are probably many more problems in my playbook, but I'm happy for any help.
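For reference, a sketch of the structure the apt module expects (not a drop-in replacement): module parameters are indented under the module name, deb takes a .deb file or URL (repository URLs need the apt_repository module instead), and update_cache belongs inside apt:

    ---
    - hosts: localhost
      become: true
      tasks:
        - name: Install a .deb package from the internet
          apt:
            deb: http://download.xnview.com/XnConvert-linux-x64.deb

        - name: Install a list of packages
          apt:
            update_cache: yes
            pkg:
              - calibre
              - vlc

        - name: Remove dependencies that are no longer required
          apt:
            autoremove: yes

The "offending line" error is a YAML-level complaint: the parser hits a task item whose indentation doesn't match its siblings, which is why an editor linter may not flag it.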

Encrypt at rest existing AWS EFS instances - is it possible?

Posted: 24 Mar 2021 05:51 PM PDT

Based on my understanding of the AWS documentation, it appears that the only way to encrypt existing EFS instances at rest is to create new EFS instances with encryption enabled, copy the files from the unencrypted EFS to the encrypted one, and update any mount points.

Can anybody confirm that is the case?
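That matches the documented behavior: encryption at rest can only be enabled when the file system is created. A sketch of the migration flow with the AWS CLI (names and mount points are placeholders):

    # create a new EFS file system with encryption at rest enabled
    aws efs create-file-system --encrypted --creation-token my-encrypted-efs

    # mount both file systems on an EC2 instance, then copy the data across
    sudo rsync -a /mnt/efs-old/ /mnt/efs-new/

For larger file systems, AWS DataSync can perform the copy instead of rsync.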

Automatic installation of updates on Windows Server 2019

Posted: 24 Mar 2021 10:57 PM PDT

On a freshly-installed, non-domain-joined Windows Server 2019 (with desktop experience) VM, the ability to change Windows Update installation settings seems to have vanished, with the "Some settings are managed by your organization" message:

Windows Update settings showing settings disabled

Viewing the configured update policies shows two set on the device, both with a type of Group Policy:

  • Download the updates automatically and notify when they are ready to be installed
  • Set automatic update options

However, running rsop and gpresult both (as expected) show no group policy objects applied. (It's a standalone system, so no domain policy applies.)

Is this expected?

Amazon also acknowledges this (https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/common-messages.html#some-settings-managed-by-org) for their 2019 EC2 images, but it seems odd that gpedit.msc is the only mechanism for enabling automatic update installation.
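If the goal is just to enable automatic installation without gpedit.msc, the same settings can be written as local-policy registry values (a sketch; AUOptions 4 = auto download and schedule the install):

    # PowerShell, run elevated
    $au = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU'
    New-Item -Path $au -Force | Out-Null
    New-ItemProperty -Path $au -Name NoAutoUpdate -Value 0 -PropertyType DWord -Force | Out-Null
    New-ItemProperty -Path $au -Name AUOptions -Value 4 -PropertyType DWord -Force | Out-Null

These are the same values the local Group Policy editor writes, which is presumably why the Settings page reports them as managed.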

OpenVPN use another port if default is blocked

Posted: 24 Mar 2021 10:07 PM PDT

I have two OpenVPN servers listening on two different ports. The first one, which the client should try first, listens on the default port 1194 UDP. The second one, used on networks with a restrictive firewall, listens on port 443 TCP.

How to configure the client file correctly if:

  1. The client should try the faster UDP server on port 1194 first, and
  2. if port 1194 is blocked, switch to server 2 on port 443 over TCP after 5 seconds.

Currently I've added the following lines to my client config, but the client only switches to server 2 after 2 minutes, with the error

TCP: connect to [AF_INET]SERVERIP:1194 failed: Unknown error

My current lines in the config:

remote serverip1 1194
remote serverip2 443
keepalive 2 6
resolv-retry 2

How can I make this timeout shorter?
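One approach, as a sketch (assuming OpenVPN 2.4+ on the client), is to give each server its own <connection> block and cap how long each remote is tried with connect-timeout, so the UDP attempt is abandoned after a few seconds:

    # client config sketch; serverip1/serverip2 as in the config above
    <connection>
    remote serverip1 1194 udp
    connect-timeout 5
    </connection>

    <connection>
    remote serverip2 443 tcp
    </connection>

OpenVPN tries the connection profiles in order, so the TCP fallback is only used once the 5-second UDP attempt gives up.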

Apache never wants to run my python scripts

Posted: 24 Mar 2021 08:01 PM PDT

Environment: Ubuntu 16, Apache 2.4

The last three times I tried to set up Apache to serve Python, I ran into the same problem: Apache lets me download the script instead of running it and serving the HTML to the browser. Each time I eventually fix it, it turns out to be a combination of things I can never quite pin down.

For Simplicity:

I have the following shebang at the top of my file: #!/usr/bin/env python2.7

Ran sudo apt-get install libapache2-mod-python

Running a2enmod python returns Module python already enabled

I've added all sorts of apache2.conf directives; none of them work:

<Directory /var/www/html>
     Options Indexes FollowSymLinks
     Options ExecCGI
     AllowOverride None
     Order allow,deny
     Allow from all
     AddHandler cgi-script .cgi
</Directory>

According to this link, this is all that should be required:

<Directory /srv/www/yoursite/public_html>
     Options +ExecCGI
     AddHandler cgi-script .py
</Directory>

And this one works on another machine, but not the present one:

<Directory /var/www/>
     Options +ExecCGI
     AddHandler cgi-script .py
     PythonHandler mod_python.publisher
     Options Indexes FollowSymLinks
     AllowOverride None
     Require all granted
</Directory>

<Directory /var/www/html/index.py>
     Options +ExecCGI
</Directory>

*Sorry if these apache2.conf snippets look like a mess or have redundant lines; I was trying anything and everything.

Running sudo a2enmod cgi returns:

Your MPM seems to be threaded. Selecting cgid instead of cgi. Module cgid already enabled

The .py scripts are executable and are owned by www-data.

Please HELP! This is so frustrating. What have I not tried? I want this to be the last time I have to do this. Every time I approach Apache, I approach it with fear; maybe it can smell it.
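For reference, a minimal known-good shape for CGI on Apache 2.4 (a sketch, not necessarily the missing piece here): mod_python is obsolete and shouldn't be mixed in, the 2.2-style Order/Allow directives are replaced by Require in 2.4, and cgid must be enabled:

    # a2dismod python; a2enmod cgid; then in apache2.conf or the vhost:
    <Directory /var/www/html>
        Options +ExecCGI
        AddHandler cgi-script .py
        Require all granted
    </Directory>

Note that directives like PythonHandler belong to mod_python and have nothing to do with CGI execution.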



FYI: I have tried all of the (troubleshooting) steps at these sites (and many others):

https://www.linux.com/blog/configuring-apache2-run-python-scripts

https://stackoverflow.com/questions/28138997/cgi-script-downloads-instead-of-running?utm_medium=organic&utm_source=google_rich_qa&utm_campaign=google_rich_qa

Why do I get error, Invalid command 'PythonHandler'?

https://superuser.com/questions/174071/apache-serves-py-as-downloadable-file-instead-of-executing-it?utm_medium=organic&utm_source=google_rich_qa&utm_campaign=google_rich_qa (their solution returns an error)

Apache not handling python scripts (*.py) via browser



UPDATE

So, here it is. I used test.py in /var/www/html/ so as not to change the source directory. Maybe something is wrong with my browser or the way I am accessing it?

$ ls -ltr /var/www/html/test.py
-rwxr-xr-x 1 www-data www-data    71 Mar 31 17:29 test.py

$ cat /var/www/html/test.py
#!/usr/bin/python2

print("Content-type:text/html\n\n")
print("hello")

$ grep -A 3 '<Directory /var/www/html/>' /etc/apache2/apache2.conf
<Directory /var/www/html/>
     Options +ExecCGI
     AddHandler cgi-script .py
</Directory>

$ ./test.py
Content-type:text/html

hello

I looked in the log files. Nothing odd in access.log, but error.log had an error about mismatched Python versions. According to this link, it shouldn't be a problem. Regardless, I resolved the error by running:

$ apt-get remove libapache2-mod-python libapache2-mod-wsgi
$ apt-get build-dep libapache2-mod-python libapache2-mod-wsgi

Proof: Script downloading

Still no luck. I noticed there was an apache2 service running even after it was stopped. Thinking it might be a zombie process, I terminated all apache2 processes (they kept reappearing), uninstalled and purged apache2, restarted, and tried again.

How to set up multiple static IPs on WAN in pfSense

Posted: 24 Mar 2021 11:01 PM PDT

My ISP gives me a static address (say 70.10.170.100) as well as a range of two hosts, 69.169.20.120/30. I get handed the first static address when I connect via PPPoE.

I have added a virtual IP entry in pfSense (Firewall: Virtual IPs) of type proxy ARP with the 69.169.20.120/30 range. From the outside I can ping one of my hosts in the range, i.e. ping 69.169.20.121 works.

But I cannot figure out how to set up a NAT port forward so that I can reach an internal web server from the outside via one of the two range IP addresses. I don't want to use 1:1 NAT because I mostly want to forward port 80 on multiple addresses to the same web server.

Domain Controller not auto enrolling Kerberos Certificate from new 2016 CA

Posted: 24 Mar 2021 05:31 PM PDT

I migrated a Windows 2008 R2 DC and Enterprise Root CA to a new Windows 2016 DC and CA. Everything seemed stable, except that a few RODCs and writable DCs were showing "Failed Requests" in the CA for their auto-enrollment of the KerberosAuthentication certificate.

The error is:

Event ID: 13

Certificate enrollment for Local system failed to enroll for a KerberosAuthentication certificate with request ID 1052 from CAServer.domain.com\domain-CAServer-CA (The RPC server is unavailable. 0x800706ba (WIN32: 1722)).

Along with:

Event ID: 6

Automatic certificate enrollment for local system failed (0x800706ba) The RPC server is unavailable.

All other auto-enrollments work from these DCs, and most other DCs do not exhibit this behavior at all, enrolling just fine for all certs including the KerberosAuthentication certificate.

What is causing these particular clients to fail auto-enrolling just this KerberosAuthentication Certificate?
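Since the failure is RPC-level rather than template-level, a useful first check from an affected DC is whether DCOM/RPC to the CA works at all; certutil can test this directly (using the CA name from the event text):

    certutil -config "CAServer.domain.com\domain-CAServer-CA" -ping

If the ping fails from the broken DCs but succeeds elsewhere, the problem is connectivity (firewall rules on the dynamic RPC port range, name resolution) rather than the certificate template itself.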

AD printer installs require admin rights

Posted: 24 Mar 2021 11:01 PM PDT

I'm working on overhauling how we manage printers in one environment. We have a print server where folks can hit \\servername and select the printer(s) desired. This works perfectly - no UAC prompts, trust prompts, etc.

I'd like to make use of the "Add Printer / Device" wizard (DevicePairingWizard.exe) so we can have multiple print servers display in the same window. It opens, but when a non-admin attempts to add a printer it produces a UAC prompt. It only happens with printers whose driver is not already installed.

The prompt is specifically for "Change Printing Settings" and references printui.exe \\servername\printer. If I cancel out, it produces error 0x00000bcb for a missing driver.

The clients are Windows 10. Print server 2008r2, domain controllers 2012r2, domain level 2012r2, functional level the same.

I think the Point and Print restrictions are OK (based on direct access to the shares working), but the settings are as shown, defined under the computer object.

screenshot

What am I doing wrong here?

EDIT:

I have tried the "Disabled" Point and Print restrictions, as @Ruscal and @yagmoth555 suggest, without impact.

When I run the executable directly and without any arguments it still comes back with the UAC prompt. Looks like it's something embedded in the executable itself.

Looking at what this executable is supposed to do, it should just pass the command through to the associated rundll32 command.

This fails with a UAC prompt.

C:\windows\system32\printui.exe /gm /in /n "\\printserver.mydom.com\canon1"  

This is the command (captured by sysinternals procmon) it executes when run with admin rights. This command will run correctly and install the printer even without elevated permissions.

rundll32 printui.dll,PrintUIEntry /gm /in /n "\\printserver.mydom.com\canon1"  

proxy_fcgi:error (70008)Partial results are valid but processing is incomplete. AH01075

Posted: 24 Mar 2021 10:49 PM PDT

I have a server running with:

  • Ubuntu 16.04
  • Apache 2.4.18
  • WORKER-MPM
  • PHP 7.0.8-0ubuntu0.16.04.3
  • PHP-FPM
  • OPcache 7.0.8-0ubuntu0.16.04.3

In the browser there is an Ajax script that sends a query every 5 seconds to a PHP file to update a timestamp in the DB. This script works well on other servers, but here, even with not many users, it logs the following error:

[Mon Dec 05 09:11:39.575035 2016] [proxy_fcgi:error] [pid 7831:tid 140159538292480] (70008)Partial results are valid but processing is incomplete: [client 172.30.197.200:64422] AH01075: Error dispatching request to : (reading input brigade), referer: http://10.200....file.php

I have no idea what it is or how to fix it. I have searched the entire web and didn't find much; any hint would be appreciated.
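For comparison, the usual way this stack is wired on Apache 2.4.10+ is a SetHandler proxy to the FPM socket (a sketch, assuming the stock Ubuntu socket path that appears in the debug log below):

    <FilesMatch "\.php$">
        SetHandler "proxy:unix:/run/php/php7.0-fpm.sock|fcgi://localhost"
    </FilesMatch>

The "(reading input brigade)" part of the error means Apache failed while reading the request body from the client, so clients aborting the 5-second Ajax poll mid-request are also worth ruling out.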

Edit 1:

I switched the log level to debug, and the full log for the error is this:

[Wed Dec 07 08:55:13.465599 2016] [authz_core:debug] [pid 5461:tid 139687427467008] mod_authz_core.c(809): [client 172.31.42.163:54432] AH01626: authorization result of Require all granted: granted, referer: http://10.200.200.214/sala.php?sala=Unica

[Wed Dec 07 08:55:13.465613 2016] [authz_core:debug] [pid 5461:tid 139687427467008] mod_authz_core.c(809): [client 172.31.42.163:54432] AH01626: authorization result of <RequireAny>: granted, referer: http://10.200.200.214/sala.php?sala=Unica

[Wed Dec 07 08:55:13.465634 2016] [proxy:debug] [pid 5461:tid 139687427467008] mod_proxy.c(1160): [client 172.31.42.163:54432] AH01143: Running scheme unix handler (attempt 0), referer: http://10.200.200.214/sala.php?sala=Unica

[Wed Dec 07 08:55:13.465640 2016] [proxy_fcgi:debug] [pid 5461:tid 139687427467008] mod_proxy_fcgi.c(879): [client 172.31.42.163:54432] AH01076: url: fcgi://localhost/var/www/html/sala.server.php proxyname: (null) proxyport: 0, referer: http://10.200.200.214/sala.php?sala=Unica

[Wed Dec 07 08:55:13.465652 2016] [proxy_fcgi:debug] [pid 5461:tid 139687427467008] mod_proxy_fcgi.c(886): [client 172.31.42.163:54432] AH01078: serving URL fcgi://localhost/var/www/html/sala.server.php, referer: http://10.200.200.214/sala.php?sala=Unica

[Wed Dec 07 08:55:13.465658 2016] [proxy:debug] [pid 5461:tid 139687427467008] proxy_util.c(2160): AH00942: FCGI: has acquired connection for (*)

[Wed Dec 07 08:55:13.465663 2016] [proxy:debug] [pid 5461:tid 139687427467008] proxy_util.c(2213): [client 172.31.42.163:54432] AH00944: connecting fcgi://localhost/var/www/html/sala.server.php to localhost:8000, referer: http://10.200.200.214/sala.php?sala=Unica

[Wed Dec 07 08:55:13.465668 2016] [proxy:debug] [pid 5461:tid 139687427467008] proxy_util.c(2250): [client 172.31.42.163:54432] AH02545: fcgi: has determined UDS as /run/php/php7.0-fpm.sock, referer: http://10.200.200.214/sala.php?sala=Unica

[Wed Dec 07 08:55:13.465735 2016] [proxy:debug] [pid 5461:tid 139687427467008] proxy_util.c(2422): [client 172.31.42.163:54432] AH00947: connected /var/www/html/sala.server.php to httpd-UDS:0, referer: http://10.200.200.214/sala.php?sala=Unica

[Wed Dec 07 08:55:13.465771 2016] [proxy:debug] [pid 5461:tid 139687427467008] proxy_util.c(2701): AH02823: FCGI: connection established with Unix domain socket /run/php/php7.0-fpm.sock (*)

[Wed Dec 07 08:55:13.480503 2016] [proxy_fcgi:error] [pid 5461:tid 139687427467008] (70008)Partial results are valid but processing is incomplete: [client 172.31.42.163:54432] AH01075: Error dispatching request to : (reading input brigade), referer: http://10.200.200.214/sala.php?sala=Unica

[Wed Dec 07 08:55:13.480533 2016] [proxy:debug] [pid 5461:tid 139687427467008] proxy_util.c(2175): AH00943: FCGI: has released connection for (*)

How to make sure a user has recursive permission to create folders and files

Posted: 24 Mar 2021 07:06 PM PDT

I have a user gitlab-runner which runs CI; basically, whenever I push something to the GitLab repository, it builds the project and then copies it to /var/www/stanislavromanov.com.

The problem is that it has no permission to do so.

Error

$ cp -R ./build/* /var/www/stanislavromanov.com/
cp: cannot create regular file '/var/www/stanislavromanov.com/404.html': Permission denied
cp: cannot create directory '/var/www/stanislavromanov.com/blog': Permission denied
cp: cannot create regular file '/var/www/stanislavromanov.com/ci.log': Permission denied
cp: cannot create regular file '/var/www/stanislavromanov.com/favicon.ico': Permission denied
cp: cannot create directory '/var/www/stanislavromanov.com/fonts': Permission denied
cp: cannot create directory '/var/www/stanislavromanov.com/img': Permission denied
cp: cannot create regular file '/var/www/stanislavromanov.com/index.html': Permission denied
cp: cannot create regular file '/var/www/stanislavromanov.com/index.xml': Permission denied
cp: cannot create directory '/var/www/stanislavromanov.com/privacy': Permission denied
cp: cannot create regular file '/var/www/stanislavromanov.com/scripts.js': Permission denied
cp: cannot create regular file '/var/www/stanislavromanov.com/sitemap.xml': Permission denied
cp: cannot create regular file '/var/www/stanislavromanov.com/styles.css': Permission denied
ERROR: Build failed: exit status 1

I have tried this: sudo chown -R gitlab-runner /var/www and this sudo chown -R gitlab-runner:gitlab-runner /var/www.

I still get the same error. I am 100% sure the user is gitlab-runner, because whoami shows gitlab-runner.

What am I doing wrong?

I fixed it by setting chmod 777 on stanislavromanov.com, but I believe this is far from an optimal solution.
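A tighter alternative to 777 (a sketch, assuming the web server reads the site as group www-data) is group ownership plus the setgid bit, so files created by the runner stay readable:

    sudo chown -R gitlab-runner:www-data /var/www/stanislavromanov.com
    sudo chmod -R u+rwX,g+rX /var/www/stanislavromanov.com
    sudo find /var/www/stanislavromanov.com -type d -exec chmod g+s {} +

If the error persists even after chown -R, it's worth checking with ls -ld whether the CI job actually writes to this same path on this same host (and not, say, inside a container with its own filesystem).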

CentOS server reboots itself

Posted: 24 Mar 2021 10:07 PM PDT

I am running CentOS 6.x on my server. It rebooted itself at 10:30am and 11:13am today.

I checked the /var/log/messages file but couldn't make sense of it. What could cause the reboots?

My log file (I couldn't paste it here because of the character limit):

http://pastebin.com/R9VN3nSJ
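When /var/log/messages is inconclusive, two quick checks are the wtmp reboot records and a keyword sweep of the logs (a sketch); a reset with no shutdown messages at all usually points at a kernel panic, a watchdog, or hardware/power rather than a clean reboot:

    last -x | head -20                      # reboot/shutdown/runlevel records
    sudo grep -iE 'panic|oops|mce|watchdog|thermal' /var/log/messages*

On hardware with a BMC, the IPMI system event log (ipmitool sel list) is also worth checking for power events.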

Squid Proxy: 400 Bad Request when "%25" (Percent Sign) in URL

Posted: 24 Mar 2021 08:01 PM PDT

I have a squid proxy that works well except for this issue:

If a URL has a %25 in it (the URL-encoded percent sign), we get a 400 Bad Request, and "Bad Request" is displayed in the web browser.

Example URL: http://www.amazon.com/25%25-Percent-Off-Stickers-Adhesive/dp/B00J0IBJ0S/

Log:

12/Jan/2016:18:40:28 -0600 429 MY.IP.IS.HERE TCP_MISS/400 310 GET http://www.amazon.com/25%25-Percent-Off-Stickers-Adhesive/dp/B00J0IBJ0S/ - ROUNDROBIN_PARENT/three text/html

I'm not sure if this is a bug or a configuration error. I have a round robin setup as shown above. Here is the output of squid3 -v:

Squid Cache: Version 3.1.19
configure options: '--build=x86_64-linux-gnu' '--prefix=/usr' '--includedir=${prefix}/include' '--mandir=${prefix}/share/man' '--infodir=${prefix}/share/info' '--sysconfdir=/etc' '--localstatedir=/var' '--libexecdir=${prefix}/lib/squid3' '--srcdir=.' '--disable-maintainer-mode' '--disable-dependency-tracking' '--disable-silent-rules' '--datadir=/usr/share/squid3' '--sysconfdir=/etc/squid3' '--mandir=/usr/share/man' '--with-cppunit-basedir=/usr' '--enable-inline' '--enable-async-io=8' '--enable-storeio=ufs,aufs,diskd' '--enable-removal-policies=lru,heap' '--enable-delay-pools' '--enable-cache-digests' '--enable-underscores' '--enable-icap-client' '--enable-follow-x-forwarded-for' '--enable-auth=basic,digest,ntlm,negotiate' '--enable-basic-auth-helpers=LDAP,MSNT,NCSA,PAM,SASL,SMB,YP,DB,POP3,getpwnam,squid_radius_auth,multi-domain-NTLM' '--enable-ntlm-auth-helpers=smb_lm,' '--enable-digest-auth-helpers=ldap,password' '--enable-negotiate-auth-helpers=squid_kerb_auth' '--enable-external-acl-helpers=ip_user,ldap_group,session,unix_group,wbinfo_group' '--enable-arp-acl' '--enable-esi' '--enable-zph-qos' '--enable-wccpv2' '--disable-translation' '--with-logdir=/var/log/squid3' '--with-pidfile=/var/run/squid3.pid' '--with-filedescriptors=65536' '--with-large-files' '--with-default-user=proxy' '--enable-linux-netfilter' 'build_alias=x86_64-linux-gnu' 'CFLAGS=-g -O2 -fPIE -fstack-protector --param=ssp-buffer-size=4 -Wformat -Wformat-security -Werror=format-security' 'LDFLAGS=-Wl,-Bsymbolic-functions -fPIE -pie -Wl,-z,relro -Wl,-z,now' 'CPPFLAGS=-D_FORTIFY_SOURCE=2' 'CXXFLAGS=-g -O2 -fPIE -fstack-protector --param=ssp-buffer-size=4 -Wformat -Wformat-security -Werror=format-security' --with-squid=/build/squid3-FzlLQ3/squid3-3.1.19

uname -a:

Linux MyHostName 3.13.0-44-generic #73~precise1-Ubuntu SMP Wed Dec 17 00:39:15 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

(It's an Ubuntu 12.04.5 LTS server)

The hack for this URL would simply be to strip out the %25, but that wouldn't work on all destination websites.

The URL works fine when not going through the proxy.

Thanks for any ideas, I'm willing to provide more config info.

Web proxy with SSH Tunnel to Ubuntu 12.04 with Putty is refusing connections

Posted: 24 Mar 2021 07:06 PM PDT

I'm trying to set up an SSH tunnel via PuTTY so I can access my router admin pages.

I have an Ubuntu 12.04 machine on the local network with SSH access.

I can SSH into that Ubuntu box (which sits on the same network as the router I'm trying to get to).

So this is what I'm doing for my SSH tunnel

In PuTTY:

  • I create an SSH tunnel with source port 9999, destination type Dynamic, address family Auto (then click Add)
  • I see the port D9999 listed
  • I then make a connection to the remote machine with PuTTY (over port 22) and log in to the remote machine.

In Firefox

I set the connection to use a SOCKS5 proxy at localhost, port 9999.

Now when I try to connect to any website in Firefox it says:

The proxy server is refusing connections
Firefox is configured to use a proxy server that is refusing connections.

While SSH'd into the remote box, I can run telnet www.google.com 80 and that connects just fine.

What am I missing?
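To isolate whether PuTTY's forwarding or the browser setup is at fault, the same dynamic tunnel can be built with a command-line OpenSSH client and tested with curl's SOCKS support (a sketch; user@ubuntu-box and the router address are placeholders):

    ssh -N -D 9999 user@ubuntu-box
    curl -x socks5h://localhost:9999 http://192.168.1.1/

If curl works where Firefox doesn't, the tunnel is fine and the browser's proxy settings (for example a "No proxy for localhost" exception) are the problem.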

Domain accounts not visible in ACL on Windows Server 2008

Posted: 24 Mar 2021 06:04 PM PDT

We have two new servers running Windows Server 2008: one a file server and one a DC. The file server is joined to the domain, but when we attempt to edit NTFS ACLs, domain objects are not available. Any ideas what may be going on?

Setting correct Content-Type sent from Wordpress, on Apache server

Posted: 24 Mar 2021 09:07 PM PDT

I need help pointing me in the right direction for setting the Content-Type returned by Apache for content produced by WordPress. I'm having trouble figuring out why WordPress is returning incorrect headers.

Issue

The specific problem is that our Wordpress blog pages are being downloaded as a file rather than displayed by Internet Explorer and Chrome v21.

Content-Type: application/x-gzip is being returned by the server.

I'm told that I should expect Content-Type: text/html.

Background

The URL is http://www.bitesizeirishgaelic.com/blog/.
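Inspecting the headers from outside a browser shows exactly what the server sends, which helps separate an Apache AddType/AddEncoding problem from double compression by a caching or gzip plugin (a sketch):

    curl -sI http://www.bitesizeirishgaelic.com/blog/ | grep -i '^content-'

A compressed page should arrive as Content-Type: text/html together with Content-Encoding: gzip; a bare Content-Type: application/x-gzip means the body is being served as a gzip file rather than as compressed HTML.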

Shortcut to file permissions

Posted: 24 Mar 2021 09:07 PM PDT

I have two mapped drives on our server (M & P, management and public) with rights for everyone on P and only rights for management on M.

We now have team leads who need access to just a few folders on M. Is it possible to create a shortcut on P to the needed folders on M which would allow the team leads to access just the files they need on M? For instance, a team lead needs access to M:\Ops\Schedules (and the files within the Schedules folder). I would like to create a shortcut on P to M:\Ops\Schedules allowing the team lead to open the Schedules folder.

I have tried adding the Team Lead group (which has read and execute permissions) to both the shortcut and the folder where the files reside, but I get an error message saying that the user does not have sufficient rights.

TIA,

Brian Enderle
