Sunday, August 15, 2021

Recent Questions - Server Fault

What is the meaning of the "/dev/null 2>&1" in a Cronjob entry?

Posted: 15 Aug 2021 10:06 PM PDT

Can someone explain what "2>&1" does in the cron job below?

    0 23 * * * wget -q -O /dev/null "https://example.com/index.php" > /dev/null 2>&1
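For reference, a minimal sketch of what the redirection does, runnable in any POSIX shell: 2>&1 points stderr (descriptor 2) at wherever stdout (descriptor 1) currently points, so combined with > /dev/null both streams are discarded.

    # both streams discarded: stdout goes to /dev/null first, then stderr follows it
    ls /nonexistent > /dev/null 2>&1

    # order matters: here stderr is duplicated to the terminal before stdout moves
    ls /nonexistent 2>&1 > /dev/null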

openssl upgrade | fail validating certificate

Posted: 15 Aug 2021 09:44 PM PDT

I am working on a CentOS 7 machine, and I am trying to upgrade the machine's openssl version from 1.0.2k to 1.1.0l. The handshake with my server (which didn't change) seems to fail after the upgrade, and I'm trying to figure out the cause.

Running the following command with both openssl versions:

openssl s_client -showcerts -connect server:port

resulted in failure with the newer one (if I provide -CAfile, validation works with both). A diff of the results:

Old 1.0.2k (handshake successful):

    Server Temp Key: ECDH, P-256, 256 bits
    New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES128-GCM-SHA256

New 1.1.0l (fails handshake):

    Server Temp Key: X25519, 253 bits
    New, TLSv1.2, Cipher is ECDHE-RSA-AES128-GCM-SHA256
    Verify return code: 20 (unable to get local issuer certificate)

I would appreciate help understanding the difference and why the two versions behave differently.
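A plausible angle worth checking, sketched here as an assumption rather than a confirmed diagnosis: an OpenSSL built from source often uses a different default certificate directory (OPENSSLDIR) than the distro package, which produces exactly this verify code 20.

    # compare the default cert locations of the two builds
    openssl version -d

    # test against the CentOS system CA bundle explicitly
    openssl s_client -showcerts -connect server:port \
        -CAfile /etc/pki/tls/certs/ca-bundle.crt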

fyi, I started a similar thread here: https://stackoverflow.com/questions/68763253/openssl-upgrade-fail-validating-certificate?noredirect=1#comment121583146_68763253 without much luck.

Thanks :)

loading additional modules with ansible tower

Posted: 15 Aug 2021 08:55 PM PDT

I'm trying to run a playbook on Ansible Tower, but I'm having issues loading extra modules. I've checked that the playbook is configured correctly, but it still fails with the message below...

    [WARNING]: Invalid characters were found in group names but not replaced, use -vvvv to see details
    ERROR! couldn't resolve module/action 'ansible.windows.win_package'. This often indicates a misspelling, missing collection, or incorrect module path.
    The error appears to be in '/tmp/bwrap_371_vfy0csh9/awx_371_vu6g6dfa/project/windows-playbook.yml': line 5, column 7, but may
    be elsewhere in the file depending on the exact syntax problem.
    The offending line appears to be:
        - name: Test Install
          ^ here

I might be blind, but how do I get Ansible Tower to load these modules? I'm not sure if it's a setting I've missed or extra config required in the playbook itself... Any help would really be appreciated. I'll pop my playbook in below.

    ---
    - hosts: all
      tasks:
        - name: Test Install
          ansible.windows.win_package:
            path: \\FILESHARE\data\Software\Installer.msi
            arguments: '/q /norestart'
            state: present
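One hedged suggestion, assuming Tower is recent enough to install collections during project sync (Ansible 2.9+ / Tower 3.5+): add a collections requirements file to the project root so the ansible.windows collection gets pulled in automatically.

    # collections/requirements.yml, in the project repository root
    ---
    collections:
      - name: ansible.windows

The local equivalent for testing outside Tower would be ansible-galaxy collection install ansible.windows.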

Reason why RHEL Apache is installed on middleware folder

Posted: 15 Aug 2021 08:37 PM PDT

On RHEL, what would cause Apache (httpd) to be installed in the folder "/opt/middleware/httpd/" instead of the usual installation folder? What should I do on my server to get the same layout, since I am trying to replicate a client environment?
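For context, that layout is typical of an httpd built from source rather than installed from RPM, since Apache's build lets you choose any install prefix. A hedged sketch of such a build (version number illustrative):

    # build httpd from source into /opt/middleware/httpd
    tar xzf httpd-2.4.x.tar.gz && cd httpd-2.4.x
    ./configure --prefix=/opt/middleware/httpd
    make && make install
    /opt/middleware/httpd/bin/apachectl start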

Azure SQL DB - Elastic Pool Vs Hyperscale

Posted: 15 Aug 2021 08:04 PM PDT

We have an Azure SQL DB (DTU-based, Standard 3, 50 GB). The business requirement is that the DB may grow to 10 TB. We are considering moving to an elastic pool to save cost; Hyperscale (Gen5) is another option under consideration. While analyzing, we identified the following points. Please help us make the right decision.

  1. Storage: Hyperscale can be scaled up to 100 TB; Elastic Pool up to 8 TB. (Hyperscale takes lead) (I assume the 8 TB applies to the entire pool, not to each DB.)
  2. Storage Cost: Included for Elastic Pool; billed separately in Hyperscale. (EP takes lead)
  3. Total Cost: For 4 TB, Hyperscale costs around USD 1,050/month (with 4 vcores, 1-year reserved, compute + storage). EP costs around USD 5,500/month for Standard and USD 21,900/month for Premium. (Hyperscale takes clear lead)
  4. Cross-DB CRUD: Though cross-DB CRUD operations are achievable in EP, setting them up for multiple DBs is cumbersome and time-consuming (with elastic query and sp_execute_remote), and our ETL jobs need to read and write across all DBs. In Hyperscale it is simple and straightforward, as it is a single DB. (Hyperscale takes lead)
  5. Switching Tier: It is not possible to come out of Hyperscale, but EP can be changed to another tier/purchasing model. (EP takes lead)
  6. Elastic pool is not supported in Hyperscale.
  7. Geo-replication is not an issue.

It seems Hyperscale is the better option. Please point out anything I have missed.

Using Local Drive as a Web Server Directory

Posted: 15 Aug 2021 07:35 PM PDT

I'm trying to find the best way to include a local NAS hard-drive folder in the web server directory. The web server is Apache, running on a public Debian server. I have all the flexibility in the world to install/configure packages on both sides.

So, essentially mounting a local folder say /mnt/drive/folder on the NAS, to /var/www/html/remotefolders

Can this be accomplished using SSHFS or is there another way?
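SSHFS can do this. A minimal sketch, assuming the NAS runs an SSH server reachable from the Debian host as nas.example.com (a hypothetical name):

    # on the Debian web server
    apt-get install sshfs
    mkdir -p /var/www/html/remotefolders
    sshfs user@nas.example.com:/mnt/drive/folder /var/www/html/remotefolders \
        -o allow_other,reconnect,ServerAliveInterval=15
    # allow_other (so Apache's user can read the mount) requires
    # user_allow_other to be enabled in /etc/fuse.conf

One caveat: every request Apache serves from that path pays the NAS round-trip, so NFS or a periodic rsync may be worth comparing.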


AWS - Why is my ACM issued cert not appearing when creating a Load Balancer ("No existing certificates")

Posted: 15 Aug 2021 06:54 PM PDT

I'm trying to create an Application Load Balancer for a LAMP-stack EC2 server. Both the EC2 server and the certificate are deployed in US East (Ohio) us-east-2, and I'm trying to create the load balancer there as well.

But when I'm setting up the load balancer and get to the step where I select an ACM-managed cert, the dropdown says "No existing certificates".

The certificate is Issued and not In Use. I created it some time ago (actually, about 2 years ago). I also tried creating a Classic Load Balancer, and the certificate was not available there either.
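One quick way to cross-check what the console should show is the AWS CLI; a sketch, assuming credentials for the same account:

    # list ACM certificates in us-east-2; the one you expect should be ISSUED
    aws acm list-certificates --region us-east-2

If the certificate doesn't appear here either, it most likely lives in a different region or account than the one the load balancer wizard is using.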

Stopping UDP Attack

Posted: 15 Aug 2021 04:14 PM PDT

I am now getting support emails from OVH that there is unusual activity on my server.

This is a simple server that provides RDP connections for students to access QuickBooks, Excel, and Word. There is nothing else on the server, and I have group policies set so that they have almost no access to anything, including the internet, files, etc.

Below is the message I am getting from OVH. I have blocked all outbound UDP in the Windows firewall and the computer configuration. I am not an expert in this area; will this stop the unusual behavior?

    Attack detail : 4Kpps/53Mbps
    dateTime srcIp:srcPort dstIp:dstPort protocol flags packets bytes reason
    2021.08.15 21:56:26 CEST 135.148.34.13:389 67.220.81.64:15800 UDP --- 16384 24870912 ATTACK:UDP
    2021.08.15 21:56:26 CEST 135.148.34.13:389 67.220.81.64:703 UDP --- 16384 24870912 ATTACK:UDP
    2021.08.15 21:56:26 CEST 135.148.34.13:389 201.71.201.195:41519 UDP --- 16384 24870912 ATTACK:UDP
    2021.08.15 21:56:26 CEST 135.148.34.13:389 67.220.81.64:19103 UDP --- 16384 24870912 ATTACK:UDP
    2021.08.15 21:56:26 CEST 135.148.34.13:389 72.204.176.88:8080 UDP --- 16384 24870912 ATTACK:UDP
    2021.08.15 21:56:26 CEST 135.148.34.13:389 67.220.81.64:11396 UDP --- 16384 24870912 ATTACK:UDP
    2021.08.15 21:56:26 CEST 135.148.34.13:389 24.217.44.95:80 UDP --- 16384 24870912 ATTACK:UDP
    2021.08.15 21:56:26 CEST 135.148.34.13:389 72.204.176.88:8080 UDP --- 16384 24870912 ATTACK:UDP
    2021.08.15 21:56:26 CEST 135.148.34.13:389 67.220.81.64:32431 UDP --- 16384 24870912 ATTACK:UDP
    2021.08.15 21:56:26 CEST 135.148.34.13:389 67.220.81.64:48208 UDP --- 16384 24870912 ATTACK:UDP
    2021.08.15 21:56:26 CEST 135.148.34.13:389 67.220.81.64:7814 UDP --- 16384 24870912 ATTACK:UDP
    2021.08.15 21:56:26 CEST 135.148.34.13:389 201.71.202.157:61154 UDP --- 16384 24870912 ATTACK:UDP
    2021.08.15 21:56:26 CEST 135.148.34.13:389 87.123.195.143:443 UDP --- 16384 24870912 ATTACK:UDP
    2021.08.15 21:56:26 CEST 135.148.34.13:389 67.220.81.64:22084 UDP --- 16384 24870912 ATTACK:UDP
    2021.08.15 21:56:26 CEST 135.148.34.13:389 67.220.81.64:34101 UDP --- 16384 24870912 ATTACK:UDP
    2021.08.15 21:56:26 CEST 135.148.34.13:389 67.220.81.64:32807 UDP --- 16384 24870912 ATTACK:UDP
    2021.08.15 21:56:26 CEST 135.148.34.13:389 67.220.81.64:60109 UDP --- 16384 24870912 ATTACK:UDP
    2021.08.15 21:56:26 CEST 135.148.34.13:389 67.220.81.64:38144 UDP --- 16384 24870912 ATTACK:UDP
    2021.08.15 21:56:26 CEST 135.148.34.13:389 67.220.81.64:27707 UDP --- 16384 24870912 ATTACK:UDP
    2021.08.15 21:56:26 CEST 135.148.34.13:389 67.220.81.64:28195 UDP --- 16384 24870912 ATTACK:UDP
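For context: source port 389/UDP is LDAP (CLDAP), a service commonly abused for reflection/amplification, so this log pattern suggests something on the server is answering CLDAP queries from spoofed addresses. Blocking outbound UDP may only mask it; a hedged sketch of also blocking the inbound trigger traffic, assuming LDAP is not needed from the internet:

    netsh advfirewall firewall add rule name="Block inbound CLDAP from Internet" ^
        dir=in action=block protocol=UDP localport=389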

SSH into GKE Kubernetes cluster?

Posted: 15 Aug 2021 05:16 PM PDT

I have a GKE Kubernetes cluster that I would like to debug.

Is it possible to start a container inside the cluster using e.g. the ubuntu image and SSH into it with full privileges, so that I will be able to install software inside it with apt and run various debugging commands?
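You generally don't need SSH for this; kubectl can attach an interactive shell directly. A minimal sketch:

    # start a throwaway Ubuntu pod in the cluster and attach a shell
    kubectl run debug-shell --rm -it --image=ubuntu --restart=Never -- bash

    # inside the pod, apt works as usual
    apt-get update && apt-get install -y curl dnsutils

    # or get a shell in an existing pod instead
    kubectl exec -it <pod-name> -- /bin/bash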

GCP VM instance malfunctioning

Posted: 15 Aug 2021 03:13 PM PDT

I am currently using a GCP VM instance to run an ODK Aggregate server, and I have been unable to access the server since Friday evening. I suspect it's not linked to ODK but rather to a server issue. Indeed, I followed these steps:

  • Changed the internet connection and browser, and tried to access locally on my computer: no improvement.
  • Checked that the URL is still operational on the website where I created it (on freedns.afraid). It is.
  • Checked my GCP VM instance menu and parameters (ubuntu-1804-bionic-v20210604, g1-small: 1 vCPU, 1.7 GB of memory, 10 GB of disk storage, Intel Haswell processor platform, using W10). I didn't identify a reason for the problem, but the serial port output from the last few days was showing errors:

    Aug 13 16:24:27 enquetesouagabobo chronyd[2104]: Could not write to temporary driftfile /var/lib/chrony/chrony.drift.tmp
    Aug 13 16:39:16 enquetesouagabobo systemd-networkd[19493]: ens4: Configured
    Aug 13 17:09:17 enquetesouagabobo systemd-networkd[19493]: ens4: Configured
    [5034594.247692] systemd-journald[19543]: Failed to create new system journal: No space left on device

I think it's linked to the disk storage, which was indeed full. I doubled its capacity this afternoon (from 10 GB to 20 GB), but I get the same output after that. See for instance:

    Aug 15 18:50:55 enquetesouagabobo systemd[1]: snapd.service: Start operation timed out. Terminating.
    Aug 15 18:52:25 enquetesouagabobo systemd[1]: snapd.service: State 'stop-sigterm' timed out. Killing.
    Aug 15 18:52:25 enquetesouagabobo systemd[1]: snapd.service: Killing process 29463 (snapd) with signal SIGKILL.
    Aug 15 18:52:25 enquetesouagabobo systemd[1]: snapd.service: Main process exited, code=killed, status=9/KILL
    Aug 15 18:52:25 enquetesouagabobo systemd[1]: snapd.service: Failed with result 'timeout'.
    Aug 15 18:52:25 enquetesouagabobo systemd[1]: Failed to start Snap Daemon.
    Aug 15 18:52:25 enquetesouagabobo systemd[1]: snapd.service: Service hold-off time over, scheduling restart.
    Aug 15 18:52:25 enquetesouagabobo systemd[1]: snapd.service: Scheduled restart job, restart counter is at 949.
    Aug 15 18:52:25 enquetesouagabobo systemd[1]: Stopped Snap Daemon.
    Aug 15 18:52:25 enquetesouagabobo systemd[1]: Starting Snap Daemon...
    Aug 15 18:52:25 enquetesouagabobo snapd[29509]: AppArmor status: apparmor is enabled and all features are available
    Aug 15 18:52:25 enquetesouagabobo snapd[29509]: AppArmor status: apparmor is enabled and all features are available
    Aug 15 18:53:56 enquetesouagabobo systemd[1]: snapd.service: Start operation timed out. Terminating.

I don't know what would be best to do now, as I am not comfortable with the serial console and the command line. I have already created a persistent disk snapshot and would like to restore the data to a new disk and regain access to my current server.

Do you have any idea? Can I create a similar VM instance with the same external IP address and the disk snapshot?
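One hedged observation: resizing a GCP disk does not by itself grow the partition or filesystem, so the instance can stay stuck on "No space left on device" after the resize. A sketch of the usual follow-up from the serial console or SSH (device names assumed; verify with lsblk):

    df -h /                    # confirm the root filesystem is still full
    sudo growpart /dev/sda 1   # grow partition 1 into the new disk space
    sudo resize2fs /dev/sda1   # grow the ext4 filesystem
    sudo journalctl --vacuum-size=100M   # optionally reclaim journal space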

Thank you in advance for your help.

N.T.

Mysqli access denied, using UNIX socket

Posted: 15 Aug 2021 08:22 PM PDT

I am trying to learn PHP, and I am setting up the database connection.
In MySQL Workbench, I created a database called php and created a table. Then I created the account "sam"@"localhost" (I am using Ubuntu desktop, and sam is the output of whoami) with auth_socket, and granted it ALL on *.*. When I press Ctrl+Alt+T, type mysql, and press Enter, I can successfully log in.
Now I followed https://www.php.net/manual/en/mysqli.quickstart.connections.php and tried connecting using the socket, but failed.

2021/08/13 16:54:35 [error] 1363#1363: *11 FastCGI sent in stderr: "PHP message: PHP Warning:  mysqli::__construct(): (HY000/1698): Access denied for user 'sam'@'localhost' in /var/www/php/my2.php on line 3PHP message: PHP Warning:  main(): Couldn't fetch mysqli in /var/www/php/my2.php on line 5" while reading response header from upstream, client: 127.0.0.1, server: _, request: "GET /my2.php HTTP/1.1", upstream: "fastcgi://unix:/var/run/php/php7.4-fpm.sock:", host: "localhost:803"  

And here is my code.

    <?php
    $mysqli = new mysqli("localhost", "sam", Null, "php");
    echo $mysqli->host_info . "\n";

The output in the browser is empty.

Then I tried this.

    <html>
       <head>
          <title>Connecting MySQL Server</title>
       </head>
       <body>
          <?php
             $dbhost = 'localhost';
             $dbuser = 'sam';
             //$dbpass = 'root@123';
             $mysqli = new mysqli($dbhost, $dbuser, NULL, NULL, NULL, "/run/mysqld/mysqld.sock");

             if ($mysqli->connect_errno) {
                printf("Connect failed: %s<br />", $mysqli->connect_error);
                exit();
             }
             printf('Connected successfully.<br />');
             $mysqli->close();
          ?>
       </body>
    </html>

The output is Connect failed: Access denied for user 'sam'@'localhost', with this log:

2021/08/13 16:59:17 [error] 1363#1363: *13 FastCGI sent in stderr: "PHP message: PHP Warning:  mysqli::__construct(): (HY000/1698): Access denied for user 'sam'@'localhost' in /var/www/php/mysqli.php on line 10" while reading response header from upstream, client: 127.0.0.1, server: _, request: "GET /mysqli.php HTTP/1.1", upstream: "fastcgi://unix:/var/run/php/php7.4-fpm.sock:", host: "localhost:803"  

I don't have a password for user sam in MySQL.

    MariaDB [mysql]> select user,password from user;
    +-----------+-------------------------------------------+
    | user      | password                                  |
    +-----------+-------------------------------------------+
    | root      |                                           |
    | someone   | *2470C0C06DEE42FD1618BB99005ADCA2EC9D1E19 |
    | someone   | *9B8A3238012965AC630589D859535EA0B7C231A7 |
    | someone   | *AF411B6C73B3AC3A2316A309A71758175CC14FEE |
    | someone   | *D76A454D84260E84578F09915624F275F3D0E08B |
    | sam       |                                           |
    +-----------+-------------------------------------------+
    6 rows in set (0.001 sec)

I changed the actual usernames to someone to protect privacy. You can see that sam has no password.
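Worth noting: error 1698 with auth_socket usually means the connecting OS user is not sam; PHP-FPM typically runs as www-data, and the socket plugin authenticates the process owner, not the username passed to mysqli. A hedged sketch of one way around it, assuming MariaDB 10.4+ and that password auth is acceptable (older versions expect a pre-hashed value in USING):

    -- switch the account to password authentication (password is illustrative)
    ALTER USER 'sam'@'localhost'
      IDENTIFIED VIA mysql_native_password USING PASSWORD('choose-a-password');
    FLUSH PRIVILEGES;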

Haproxy: Restrict access to untrusted IPs only to sub url

Posted: 15 Aug 2021 03:05 PM PDT

I would like to restrict access to my pastebin server (I'm using zerobin) so that untrusted IPs can only open secrets, but not create them.

(note that the url https://fakepastebin.com below is just an example for the sake of giving this question some context)

Trusted IPs: allow access to https://fakepastebin.com (the main page, where they can generate secrets).
Non-trusted IPs: allow access only to secrets (e.g. https://fakepastebin.com/?29c6692368e9edc9#G4j8Y2w), basically anything after https://fakepastebin.com/?*

Something like :

    acl trusted-ip src -f /etc/haproxy/whitelist.lst
    acl unprotected-pages path_beg ^/..*$

How do I make it so that the unprotected pages can be accessed by all IPs? I've never tried to limit the main page before, only sub-pages, so I'm unsure how to do this. I appreciate the feedback!

UPDATE: With these in place I can now prevent untrusted IPs from going to the top level url:

    acl url_my_app hdr_dom(Host) -i fakepastebin.com
    acl top_level_uri path_reg ^/$
    acl app-query query -m reg ^(pasteid=)*[0-9a-zA-Z]{16}$
    http-request deny if url_my_app top_level_uri !app-query !trusted_ips

However, I noticed that if I browse to https://fakepastebin.com/foobahshshshhs it redirects me to the top-level URI, and I can then access it, which is not what I want :( How can I get HAProxy to deny untrusted IPs access to the top-level URI https://fakepastebin.com?
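A hedged sketch of one stricter approach, reusing the ACL names from the update above (untested against a real zerobin):

    # deny untrusted IPs any request that is not a valid paste query,
    # which covers both /foobahshshshhs and the bare top-level page
    http-request deny if url_my_app !app-query !trusted_ips
    # note: static assets (CSS/JS) needed to render a paste may then also be
    # denied; they may need their own allow (e.g. a path_beg ACL) before this rule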

Thanks

Zabbix active agent can't connect to Zabbix server - connection was forcibly closed by the remote host

Posted: 15 Aug 2021 10:06 PM PDT

I am already using active agents on other servers and everything works really nicely. I've installed the Zabbix agent on a new server and set the same config as on the other active agents. The problem is that this agent can't connect to the server.

Logs:

    End of zbx_tls_connect():FAIL error:'SSL_connect() I/O error: [0x00002746] An existing connection was forcibly closed by the remote host.'
    active check configuration update from [hidden_address:10051] started to fail (TCP successful, cannot establish TLS to [[hidden_address]:10051]: SSL_connect() I/O error: [0x00002746] An existing connection was forcibly closed by the remote host.)
    End of refresh_active_checks():FAIL

I am sure that the PSK key and ID are set correctly in both the agent and the server. My config (works on other agents):

    LogFile=C:\Zabbix\zabbix_agentd.log
    DebugLevel=5
    Server=hidden_address
    ListenPort=10051
    Hostname=hidden_name
    ServerActive=hidden_address
    EnableRemoteCommands=1

    TLSConnect=psk
    TLSAccept=psk
    TLSPSKFile=C:\Zabbix\conf\client.txt
    TLSPSKIdentity=hidden_id

The port is open on both sides, and I have checked with Test-NetConnection in PowerShell that I can connect from the agent to the server on the specified port (10051).

Any idea what else I can check or try to do to fix the problem?

Know which firmware my linux kernel has loaded since booting

Posted: 15 Aug 2021 03:08 PM PDT

When routinely updating my Debian system, I've never taken the time to pick which firmware packages I really need; basically I have them all installed, and always up to date.

I've been wondering how I can pick which ones I really need. I was thinking of using every device I have in my system (even the ones I rarely use, like bluetooth, ethernet, camera, touchpad, multimedia keys and so on) and looking at the list of loaded firmware.

Is there an easy way to find out which firmware is currently loaded, or has been loaded since the last kernel boot?
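The kernel logs each firmware load request, so grepping the boot messages is a simple first pass (exact message text varies by kernel version):

    # firmware mentions since boot
    dmesg | grep -i firmware

    # or via the journal for the current boot
    journalctl -k -b | grep -i firmware

    # then map a loaded file back to the Debian package that ships it,
    # using a file name taken from the output above
    dpkg -S iwlwifi-7265D-29.ucode   # example file name; yours will differ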

Waiting for localhost : getting this message on all browsers

Posted: 15 Aug 2021 09:08 PM PDT

I am using Ubuntu 14.04 and have php5 and mysql installed. I have 3 web applications in my /var/www/html folder. Until yesterday evening I was able to test and work on the applications. All of a sudden, I am not able to load any of my applications in any browser. I have firefox and chrome installed.

I have checked the availability of MySQL and Apache. Both are running correctly. I have also restarted Apache. I have cleared all the cookies and history from chrome and set it to default under chrome://flags.

After removing all the history and cookies from Chrome, I could load the first login page and when I provide the UID and password, I get Waiting for localhost and the page is stalled.

Of the three, one smaller application loaded after 10 minutes; a heavier application did not load at all. However, the browser loads plain HTML files.

I have also tested on wifi, a mobile internet dongle, and ethernet, and there are no firewall issues. I have also cleared my machine's DNS cache with

sudo /etc/init.d/dns-clean restart  

None of this helped. Can someone guide me on how to resolve this?

NGINX with proxy_pass behind AWS ALB, creating http://example.com:443 urls from https links - “The plain HTTP request was sent to HTTPS port”

Posted: 15 Aug 2021 06:04 PM PDT

I currently have nginx running behind AWS Application Load Balancer. I have a ghost blog on another server which I have setup using proxy_pass. It works perfectly if I go to https://www.example.com/blog

However, I have a link to https://www.example.com/blog on my homepage, and when I click on it I get 301-redirected to http://www.example.com:443/blog, resulting in "The plain HTTP request was sent to HTTPS port".

The site is also set up to 301 HTTP to HTTPS. This appears to work flawlessly.

The ALB is taking care of my SSL certs. To keep it simple I have the ALB set up with two listeners (80 and 443) but only one backend process (80). I previously had 443 set up as another process but removed it to reduce potential failure points.

I'm at a loss as to why it would 301 a perfectly good URL by turning it into HTTP on port 443, when in all other cases it appears to turn HTTP into HTTPS.

Some suggested answers were to add listen 443 ssl; to the nginx.conf, but I can't do that as no SSL certs are set up on nginx. It's all on the ALB.

    worker_processes  1;

    events {
        worker_connections  1024;
    }

    http {
        include       mime.types;
        default_type  application/octet-stream;
        sendfile        on;
        keepalive_timeout  65;

        server {
            listen 80;
            listen 443;
            server_name example.com;

            return 301 https://www.example.com;
        }

        server {
            listen 80;
            listen 443;
            server_name www.example.com;

            if ($http_x_forwarded_proto = 'http') {
                return 301 https://$host$request_uri;
            }

            sendfile on;
            default_type application/octet-stream;

            gzip on;
            gzip_http_version 1.1;
            gzip_disable      "MSIE [1-6]\.";
            gzip_min_length   256;
            gzip_vary         on;
            gzip_proxied      expired no-cache no-store private auth;
            gzip_types        text/plain text/css application/json application/javascript application/x-javascript text/xml application/xml application/xml+rss text/javascript;
            gzip_comp_level   9;

            root /usr/share/nginx/html;

            location / {
                try_files  $uri $uri/ /blog/$uri;
            }

            location /blog/ {
                proxy_pass https://ip.address;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header X-Forwarded-Proto $scheme;
            }
        }

        include servers/*;
    }
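One direction worth checking, as a hedged sketch rather than a confirmed fix: nginx builds absolute redirects (for example the trailing-slash redirect that try_files/directory handling can emit) using the port of the socket the request arrived on, which here is 443 even though the ALB speaks plain HTTP to it. Two real nginx directives control this behavior:

    # inside the server block for www.example.com
    port_in_redirect off;      # omit the port from nginx-generated redirects
    absolute_redirect off;     # nginx 1.11.8+: emit relative redirects instead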

Any ideas??

Network problems when I create Beanstalk environments from an AMI

Posted: 15 Aug 2021 04:03 PM PDT

I'm using AWS elastic beanstalk web interface to create an environment based on an existing AMI that has our application deployed on it.

The environment gets created and the app is accessible via the EC2 instance's IP. However, the environment's health stays "Pending" for 15 minutes, then degrades to Severe, with these errors in the environment's log:

2017-10-22 15:57:50 UTC+0300 INFO Launched environment: Winfooztest-env-6. However, there were issues during launch. See event log for details.

2017-10-22 15:57:49 UTC+0300 ERROR The EC2 instances failed to communicate with AWS Elastic Beanstalk, either because of configuration problems with the VPC or a failed EC2 instance. Check your VPC configuration and try launching the environment again.

2017-10-22 15:57:49 UTC+0300 ERROR Stack named 'awseb-e-ypy7mg2pta-stack' aborted operation. Current state: 'CREATE_FAILED' Reason: The following resource(s) failed to create [AWSEBInstanceLaunchWaitCondition].

2017-10-22 15:56:16 UTC+0300 WARN Environment health has transitioned from Pending to Severe. Initialization in progress (running for 16 minutes). None of the instances are sending data.

2017-10-22 15:41:48 UTC+0300 INFO Created CloudWatch alarm named: awseb-e-ypy7mg2pta-stack-AWSEBCloudwatchAlarmHigh-QVXFWC3HZS5S

So what I understood here is that the instance is created, but it's failing to communicate with elastic beanstalk. In contrast to common security sense, and in order to pinpoint the problem, I've tried to keep my VPC setting as public as possible. Here is what I did:

VPC type: Created a "VPC with a single public subnet"

IPv4 CIDR block: 10.0.0.0/16

Public subnet's IPv4 CIDR: 10.0.0.0/24

Visibility: public

Checked the option to have a public IP address for the VPC

Security group - Inbound: ALL Traffic|ALL|ALL|0.0.0.0/0

Security group - Outbound: ALL Traffic|ALL|ALL|0.0.0.0/0

Environment is configured to use a load balancer.

No luck.

I know there is a small networking tweak that I need to do. I've scratched my head (and my search engine) a lot. What am I missing? Can you help?

Setup ssl on nginx for a django project

Posted: 15 Aug 2021 03:03 PM PDT

I want to set up SSL on Nginx; my project is Django, and I also use gunicorn as the WSGI HTTP server.

I added the following lines to my settings.py:

    CSRF_COOKIE_SECURE = True
    SESSION_COOKIE_SECURE = True

I don't know if that is necessary. I then configured Nginx as follows:

    server {
        listen 80;
        server_name <name>;
        return 301 https://$host$request_uri;
    }

    server {
        #listen 80;
        listen 443 default ssl;
        client_max_body_size 4G;

        server_name <name>;

        #ssl                  on;
        ssl_certificate      /etc/nginx/ssl/ssl.crt;
        ssl_certificate_key  /etc/nginx/ssl/ssl.key;

        ssl_session_timeout  5m;

        ssl_protocols  TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers         HIGH:!aNULL:!MD5;
        ssl_prefer_server_ciphers   on;
        keepalive_timeout 5;

        # path for static files
        root /home/deploy/;

        location /static/ {
        }
        location /media/ {
        }

        location / {
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            #proxy_set_header Host $http_host;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_pass http://app_server;
        }

        error_page 500 502 503 504 /500.html;
        location = /500.html {
            root /home/deploy/static;
        }
    }

I think the Nginx configuration is correct, because it redirects 80 to 443. But nothing happens after that: the request on 80 is sent, Nginx redirects it to 443, and then it can't connect to gunicorn or the project.

Should I do something with gunicorn? My certificate is self-signed. What should I do?
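One thing that stands out: the config proxies to http://app_server, but no upstream app_server block is shown. A minimal sketch of the pairing, assuming gunicorn listens on 127.0.0.1:8000 (adjust to your actual bind address; nginx terminates the TLS, so gunicorn itself stays plain HTTP and needs no certificate):

    upstream app_server {
        server 127.0.0.1:8000 fail_timeout=0;
    }

with gunicorn started to match (module path illustrative):

    gunicorn myproject.wsgi:application --bind 127.0.0.1:8000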

regards :)

HBase Kerberos SaslException: GSS initiate failed (Mechanism level: Failed to find any Kerberos tgt)

Posted: 15 Aug 2021 08:01 PM PDT

I am trying to set up Kerberos authentication for HBase using the http://hbase.apache.org/0.94/book/security.html documentation and have made very little progress so far.

HBase 1.1.1 from Apache, without any Cloudera influences. The host machine is running CentOS 6.5.

I've already set up the Kerberos KDC and client following the instructions at https://gist.github.com/ashrithr/4767927948eca70845db. The KDC is located on the same machine as the HBase instance I'm trying to secure.

All in all, here's the current environment state: the keytab file is at /opt/hbase.keytab.

hbase-site.xml contents

    <?xml version="1.0"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
    <configuration>
      <property>
        <name>hbase.rootdir</name>
        <value>file:///opt/hbase-data/hbase</value>
      </property>
      <property>
        <name>hbase.zookeeper.property.dataDir</name>
        <value>/opt/hbase-data/zookeeper</value>
      </property>
      <property>
        <name>hbase.cluster.distributed</name>
        <value>true</value>
      </property>
      <property>
        <name>hbase.security.authentication</name>
        <value>kerberos</value>
      </property>
      <property>
        <name>hbase.security.authorization</name>
        <value>true</value>
      </property>
      <property>
        <name>hbase.coprocessor.region.classes</name>
        <value>org.apache.hadoop.hbase.security.token.TokenProvider</value>
      </property>
      <property>
        <name>hbase.master.keytab.file</name>
        <value>/opt/hbase.keytab</value>
      </property>
      <property>
        <name>hbase.master.kerberos.principal</name>
        <value>hbase/_HOST@XXX.MYCOMPANY.COM</value>
      </property>
      <property>
        <name>hbase.regionserver.kerberos.principal</name>
        <value>hbase/_HOST@XXX.MYCOMPANY.COM</value>
      </property>
      <property>
        <name>hbase.regionserver.keytab.file</name>
        <value>/opt/hbase.keytab</value>
      </property>
    </configuration>

It's a pseudo-distributed mode, and I didn't bother with the underlying HDFS to keep things as simple as possible.

However, when I start HBase with the ./start-hbase command, I get the following error in regionserver.log:

    2015-10-20 17:33:18,068 INFO  [regionserver/xxx.mycompany.com/172.24.4.60:16201] regionserver.HRegionServer: reportForDuty to master=xxx.mycompany.com,16000,1445349909162 with port=16201, startcode=1445349910087
    2015-10-20 17:33:18,071 WARN  [regionserver/xxx.mycompany.com/172.24.4.60:16201] ipc.AbstractRpcClient: Exception encountered while connecting to the server : javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
    2015-10-20 17:33:18,071 FATAL [regionserver/xxx.mycompany.com/172.24.4.60:16201] ipc.AbstractRpcClient: SASL authentication failed. The most likely cause is missing or invalid credentials. Consider 'kinit'.
    javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
            at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:212)
            at org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:179)
            at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupSaslConnection(RpcClientImpl.java:609)
            at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.access$600(RpcClientImpl.java:154)
            at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:735)
            at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:732)
            at java.security.AccessController.doPrivileged(Native Method)
            at javax.security.auth.Subject.doAs(Subject.java:415)
            at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
            at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:732)
            at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:885)
            at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:854)
            at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1180)
            at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
            at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
            at org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$BlockingStub.regionServerStartup(RegionServerStatusProtos.java:8982)
            at org.apache.hadoop.hbase.regionserver.HRegionServer.reportForDuty(HRegionServer.java:2260)
            at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:893)
            at java.lang.Thread.run(Thread.java:745)
    Caused by: GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)
            at sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:147)
            at sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:121)
            at sun.security.jgss.krb5.Krb5MechFactory.getMechanismContext(Krb5MechFactory.java:187)
            at sun.security.jgss.GSSManagerImpl.getMechanismContext(GSSManagerImpl.java:223)
            at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:212)
            at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179)
            at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:193)
            ... 18 more
    2015-10-20 17:33:18,072 WARN  [regionserver/xxx.mycompany.com/172.24.4.60:16201] regionserver.HRegionServer: error telling master we are up
    com.google.protobuf.ServiceException: java.io.IOException: Could not set up IO Streams to xxx.mycompany.com/172.24.4.60:16000
            at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
            at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
            at org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$BlockingStub.regionServerStartup(RegionServerStatusProtos.java:8982)
            at org.apache.hadoop.hbase.regionserver.HRegionServer.reportForDuty(HRegionServer.java:2260)
            at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:893)
            at java.lang.Thread.run(Thread.java:745)
    Caused by: java.io.IOException: Could not set up IO Streams to xxx.mycompany.com/172.24.4.60:16000
            at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:777)
            at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:885)
            at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:854)
            at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1180)
            at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
            ... 5 more
    Caused by: java.lang.RuntimeException: SASL authentication failed. The most likely cause is missing or invalid credentials. Consider 'kinit'.
            at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$1.run(RpcClientImpl.java:677)
            at java.security.AccessController.doPrivileged(Native Method)
            at javax.security.auth.Subject.doAs(Subject.java:415)
            at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
            at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.handleSaslConnectionFailure(RpcClientImpl.java:635)
            at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:743)
            ... 9 more
    Caused by: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
            at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:212)
            at org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:179)
            at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupSaslConnection(RpcClientImpl.java:609)
            at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.access$600(RpcClientImpl.java:154)
            at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:735)
            at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:732)
            at java.security.AccessController.doPrivileged(Native Method)
            at javax.security.auth.Subject.doAs(Subject.java:415)
            at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
            at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:732)
            ... 9 more
    Caused by: GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)
            at sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:147)
            at sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:121)
            at sun.security.jgss.krb5.Krb5MechFactory.getMechanismContext(Krb5MechFactory.java:187)
            at sun.security.jgss.GSSManagerImpl.getMechanismContext(GSSManagerImpl.java:223)
            at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:212)
            at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179)
            at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:193)
            ... 18 more
    2015-10-20 17:33:18,073 WARN  [regionserver/xxx.mycompany.com/172.24.4.60:16201] regionserver.HRegionServer: reportForDuty failed; sleeping and then retrying.

I presume Kerberos works, because I can obtain a ticket:

    $ klist -ekt hbase.keytab
    Keytab name: FILE:hbase.keytab
    KVNO Timestamp         Principal
    ---- ----------------- --------------------------------------------------------
       3 10/19/15 17:11:42 hbase/xxx.mycompany.com@XXX.MYCOMPANY.COM (arcfour-hmac)
       3 10/19/15 17:11:42 hbase/xxx.mycompany.com@XXX.MYCOMPANY.COM (des3-cbc-sha1)
       3 10/19/15 17:11:42 hbase/xxx.mycompany.com@XXX.MYCOMPANY.COM (des-cbc-crc)

    $ kinit -kt /opt/hbase.keytab hbase/xxx.mycompany.com@XXX.MYCOMPANY.COM
    [userx1@gms-01 logs]$ klist
    Ticket cache: FILE:/tmp/krb5cc_2369
    Default principal: hbase/xxx.mycompany.com@XXX.MYCOMPANY.COM

    Valid starting     Expires            Service principal
    10/20/15 17:49:32  10/21/15 03:49:32  krbtgt/XXX.MYCOMPANY.COM@XXX.MYCOMPANY.COM
            renew until 10/27/15 16:49:32

hbase shell produces the same exception as above when trying to run the status command (or any other).
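To see why the JVM finds no TGT even though kinit works (the daemon should be logging in from the keytab, not from your shell's ticket cache), the JDK's Kerberos debug switch can help; a hedged sketch:

    # in conf/hbase-env.sh before restarting; a standard JVM system property
    export HBASE_OPTS="$HBASE_OPTS -Dsun.security.krb5.debug=true"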

If anyone has any suggestions or advice, please let me know.

Thanks in advance

Jenkins: Waiting for next available executor on master, 4 workers idle

Posted: 15 Aug 2021 06:04 PM PDT

I have a jenkins (initially 1.596.2, later upgraded to .3) master on Ubuntu, with some jobs.

Last week I started seeing jobs being put in the queue (pending—Waiting for next available executor). I checked the job config (Restrict where this project can be run) and it says Slaves in label: 1. The master's workers all report idle. I upgraded to 1.596.3 and restarted the node, but after a couple of hours of working (around 10-12) it starts queuing jobs even though workers are idle.

It doesn't have any slaves, there are plenty of resources (node is a VM with 8 GB of RAM and 500 GB disk) and there are no errors in dmesg or logs.

What can I do to unblock it?

Thanks, Ed

How to apply xNetworking xIPAddress Desired State Configuration (DSC)?

Posted: 15 Aug 2021 09:08 PM PDT

Using Windows Server 2012 R2.

The goal is to set the IPv4 address of a server. As DSC correctly states in the verbose message below, the Expected [ip is] 192.168.0.203, [while the] actual [ip is] 192.168.0.205

The following error message:

    Start-DscConfiguration -Path .\BasicServer -Verbose -Wait -Force

    VERBOSE: Perform operation 'Invoke CimMethod' with following parameters, ''methodName' = SendConfigurationApply,'className' = MSFT_DSCLocalConfigurationManager,'namespaceName' = root/Microsoft/Windows/DesiredStateConfiguration'.
    VERBOSE: An LCM method call arrived from computer ComputerName with user sid S-1-5-21-139086020-2308268882-217435134-1104.
    VERBOSE: [ComputerName]: LCM:  [ Start  Set      ]
    VERBOSE: [ComputerName]: LCM:  [ Start  Resource ]  [[xIPAddress]IPAddress]
    VERBOSE: [ComputerName]: LCM:  [ Start  Test     ]  [[xIPAddress]IPAddress]
    VERBOSE: [ComputerName]:                            [[xIPAddress]IPAddress] Checking the IPAddress ...
    VERBOSE: [ComputerName]:                            [[xIPAddress]IPAddress] IPAddress not correct. Expected 192.168.0.203, actual 192.168.0.205
    VERBOSE: [ComputerName]: LCM:  [ End    Test     ]  [[xIPAddress]IPAddress]  in 0.0310 seconds.
    VERBOSE: [ComputerName]: LCM:  [ Start  Set      ]  [[xIPAddress]IPAddress]
    VERBOSE: [ComputerName]:                            [[xIPAddress]IPAddress] Checking the IPAddress ...
    VERBOSE: [ComputerName]:                            [[xIPAddress]IPAddress] IPAddress not correct. Expected 192.168.0.203, actual 192.168.0.205
    VERBOSE: [ComputerName]:                            [[xIPAddress]IPAddress] Setting IPAddress ...
    VERBOSE: [ComputerName]:                            [[xIPAddress]IPAddress] Instance DefaultGateway already exists
    VERBOSE: [ComputerName]: LCM:  [ End    Set      ]  [[xIPAddress]IPAddress]  in 0.0620 seconds.
    PowerShell DSC resource MSFT_xIPAddress failed to execute Set-TargetResource functionality with error message: Can not set or find valid IPAddress using InterfaceAlias Ethernet and AddressFamily IPv4
    + CategoryInfo          : InvalidOperation: (:) [], CimException
    + FullyQualifiedErrorId : ProviderOperationExecutionFailure
    + PSComputerName        : ComputerName.domain.com

    The SendConfigurationApply function did not succeed.
    + CategoryInfo          : NotSpecified: (root/Microsoft/...gurationManager:String) [], CimException
    + FullyQualifiedErrorId : MI RESULT 1
    + PSComputerName        : ComputerName.domain.com

    VERBOSE: Operation 'Invoke CimMethod' complete.
    VERBOSE: Time taken for configuration job to complete is 0.268 seconds

... is thrown when applying the following xNetworking DSC configuration:

    Import-DscResource -Module xNetworking

    Node $NodeFQDN {
        xIPAddress IPAddress {
            InterfaceAlias = "Ethernet"
            IPAddress = $IPv4
            AddressFamily = "IPV4"
            DefaultGateway = '192.168.0.1'
            SubnetMask = 24
        }
    }

where $IPv4 = '192.168.0.203'.

I have noticed that the Local Configuration Manager is capable of Test-DscConfiguration; it is only unable to apply any IP-related changes. I tested this by running the configuration above on the system while the IP was already correctly set.

The message "Can not set or find valid IPAddress using InterfaceAlias Ethernet and AddressFamily IPv4" is confusing, since the LCM has obviously been able to find the adapter during the Test-DscConfiguration operation.
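When Set-TargetResource fails like this, comparing against the adapter state with the plain NetTCPIP cmdlets (not part of xNetworking) can narrow it down; a hedged sketch. Note that the "Instance DefaultGateway already exists" line in the verbose output suggests the existing default route may be what the resource is tripping over:

    # inspect the current IPv4 configuration of the adapter
    Get-NetIPAddress -InterfaceAlias Ethernet -AddressFamily IPv4
    Get-NetRoute -InterfaceAlias Ethernet -DestinationPrefix 0.0.0.0/0

    # a manual equivalent of what xIPAddress attempts (illustrative; run with care,
    # ideally from the console, since it drops connectivity momentarily)
    Remove-NetIPAddress -InterfaceAlias Ethernet -IPAddress 192.168.0.205 -Confirm:$false
    Remove-NetRoute -InterfaceAlias Ethernet -DestinationPrefix 0.0.0.0/0 -Confirm:$false
    New-NetIPAddress -InterfaceAlias Ethernet -IPAddress 192.168.0.203 -PrefixLength 24 -DefaultGateway 192.168.0.1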

Any clues as to why the Local Configuration Manager is unable to apply the configuration? What am I not seeing?

Solr 4.10.2 500 Internal Server Error Error: {msg=SolrCore 'collection1' is not available due to init failure: Index locked for write

Posted: 15 Aug 2021 03:03 PM PDT

I have a Solr master and a slave running. After upgrading to Solr 4.10.2 and fixing all other errors, I cannot get past this one:

RSolr::Error::Http - 500 Internal Server Error Error: {msg=SolrCore 'collection1' is not available due to init failure: Index locked for write for core collection1,trace=org.apache.solr.common.SolrException: SolrCore 'collection1' is not available due to init failure: Index locked for write for core collection1 at org.apache.solr.core.CoreContainer.getCore

I have:

  • stopped jetty on both master and slave,
  • removed the write.lock file from both machines,
  • restarted the slave,
  • restarted the master.

The issue persists.

I have also tried other solutions, like adding the following to solrconfig.xml:

<unlockOnStartup>true</unlockOnStartup>  

This caused different errors, so I rolled back (the above part is now commented out).

I have compared the configuration files with an environment that works and they look identical.

Thank you.

Allow Google apps and block consumer Google accounts using squid proxy

Posted: 15 Aug 2021 07:07 PM PDT

In my organisation, I am trying to allow our Google Apps accounts and block consumer Google accounts using a squid proxy. According to this link, Google says we can do it using the following steps:

    1. Route all traffic outbound to google.com through your web proxy server(s).
    2. Enable SSL interception on the proxy server.
    3. Since you will be intercepting SSL requests, you will need to configure every
       client device to trust your SSL proxy by deploying the Internal Root Certificate
       Authority used by the proxy and marking it as trusted.
    4. For each google.com request:
       a. Intercept the request.
       b. Add the HTTP header X-GoogApps-Allowed-Domains, whose value is a
          comma-separated list with allowed domain name(s). Include the domain you
          registered with Google Apps and any secondary domains you might have added.

After referring to a few online blogs and guides, I compiled and installed Squid and added the following entries to my squid.conf:

    http_port 3128 intercept
    http_port 3129
    https_port 3130 intercept ssl-bump generate-host-certificates=on dynamic_cert_mem_cache_size=4MB cert=/usr/local/squid/cert/server.crt key=/usr/local/squid/cert/server.key
    acl localnet1 dstdomain .google.com
    ssl_bump server-first localnet1
    always_direct allow localnet1
    sslproxy_cert_error allow all
    sslproxy_flags DONT_VERIFY_PEER
    request_header_add X-GoogApps-Allowed-Domains "mydomain.com" localnet1
    sslcrtd_program /usr/local/squid/libexec/ssl_crtd -s /usr/local/squid/var/ssl_db -M 20MB
    sslcrtd_children 100

With the above configuration, every request (HTTP and HTTPS) is routed through my proxy server, but it does not block consumer Google accounts and I am able to log in to them.

I have also set the proxy IP as the gateway on the client system, and on my proxy server I added the following iptables rules:

    iptables -t nat -A PREROUTING -i eth0 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 3128
    iptables -t nat -A PREROUTING -i eth0 -p tcp -m tcp --dport 443 -j REDIRECT --to-ports 3130
    iptables -I INPUT -p tcp -m tcp --dport 3130 -j ACCEPT

So what more do I need to do to block consumer Google accounts? Am I missing something here?

EDIT: While working on the above issue, I realized I was making one mistake. My port settings in squid.conf are as follows:

    http_port 3128 intercept
    http_port 3129
    https_port 3130 intercept ssl-bump generate-host-certificates=on dynamic_cert_mem_cache_size=4MB

I had set a global proxy on the client system: in the IP field I had put the proxy server's IP, and in the port field I had put 3129. So all of my requests were going through port 3129, hence traffic was not being intercepted and I was able to log in to consumer Google accounts. So I removed the proxy settings from the client system and only kept the proxy server's IP as its gateway. Now every request reaches the proxy server, but I think it's not getting routed to the ports specified in squid.conf (i.e. port 80 to 3128 and port 443 to 3130), and because of this everything is blocked.

I have tried to set iptables rules for this internal routing of ports, but nothing is working. I have only one Ethernet interface (eth0) on my proxy server. Will anybody guide me on this issue?

Struggling with Haproxy 1.5 ACLs using regular expressions and URL Parameters

Posted: 15 Aug 2021 07:07 PM PDT

I am using HAProxy 1.5.3, set up with SSL on the frontend and also sending SSL to the backend servers. The mode is http, and I'm using ACLs to determine stickiness.

My test requests are as follows:

  1. wget https://domain.com/ping?IPT=transpor6t&FROM_ADDRESS=409
  2. wget https://domain.com/ping?FROM_ADDRESS=409&IPT=transport6
  3. The real URL will have 5 different parameters, and FROM_ADDRESS will be the 3rd parameter.

I need to create sticky requests keyed on this 3rd parameter. There seem to be many ways to do this with HAProxy; even though using regex is expensive, it provides us the most flexibility, so that is what we chose. (In this example we moved away from regex to simplify the problem and decided just to look at the last character of the parameter, to make sure the regex is not the problem.)

Our ACL setup (more of a test bed to see if we can get it to work)

    acl block_1 urlp_end(FROM_ADDRESS) 0
    acl block_2 urlp_end(FROM_ADDRESS) 9

    use_backend block_1_hosts if block_1
    use_backend block_2_hosts if block_2

    backend block_1_hosts
        option httpchk GET /ping
        server s1 s1.domain.com:443 weight 1 maxconn 2000 check ssl verify none inter 2000
        server s2 s2.domain.com:443 weight 1 maxconn 2000 check ssl verify none inter 2000 backup

    backend block_2_hosts
        option httpchk GET /ping
        server s1 s1.domain.com:443 weight 1 maxconn 2000 check ssl verify none inter 2000 backup
        server s2 s2.domain.com:443 weight 1 maxconn 2000 check ssl verify none inter 2000

With the testing we have done, we believe that only the first parameter found in the URL can be matched (it does not search the rest of the parameters). This may be a bug, or maybe by design (the docs seem a little ambiguous around urlp, at least to us), but it would make sense that you should be able to match any parameter in the URL.

Test1 - FROM_ADDRESS in the second parameter position fails:

    wget https://domain.com/ping?IPT=transpor6t&FROM_ADDRESS=409

haproxy logs:

    5.35.250.77:41464 [22/Aug/2014:14:20:49.783] https-in~ https-in/<NOSRV> -1/-1/-1/-1/12 503 212 - - SC-- 0/0/0/0/0 0/0 "GET /ping?IPT=transpor6t HTTP/1.0"

Test2 - FROM_ADDRESS in the first parameter position passes:

    wget https://domain.com/ping?FROM_ADDRESS=409&IPT=transport6

haproxy logs:

    5.35.250.77:41465 [22/Aug/2014:14:21:33.763] https-in~ block_2_hosts/rs6 12/0/2/2/16 200 229 - - ---- 0/0/0/0/0 0/0 "GET /ping?FROM_ADDRESS=409 HTTP/1.0"
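One thing worth ruling out before blaming urlp: the URLs are unquoted, and in most shells an unquoted & backgrounds the command, which would truncate Test1's request at the first parameter — exactly what the HAProxy log shows ("GET /ping?IPT=transpor6t"). A minimal check:

    # quote the URL so the shell does not interpret the '&'
    wget "https://domain.com/ping?IPT=transpor6t&FROM_ADDRESS=409"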

Shouldn't they both pass with this ACL? Any thoughts? Many thanks, Andre

Group policy configuration error - Server Essentials 2012

Posted: 15 Aug 2021 05:01 PM PDT

I am trying to use the "Implement Group Policy" wizard in Windows Server 2012 Essentials.

I have a domain created and a computer joined to that domain. When I choose "Implement Group Policy", I select "all" in the Enable Folder Redirection Group Policy and also select "Windows Update", "Windows Defender" and "Network Firewall".

I finish the wizard and get the error

Group Policy Configuration Did Not Succeed

Group policy configuration encountered an error. Restart the wizard and try again.

The user's machine is Windows 8 Pro. I have checked the event logs and looked for .log files, and cannot find anything that helps.

I have also tried selecting no folders to redirect, and different combinations of the "Security Policy Settings".

Can anyone here offer some guidance as to why this is failing?

Thanks

Pat

ClearOS SMTP Server Setup using Gmail SMTP

Posted: 15 Aug 2021 05:01 PM PDT

How do I set up the ClearOS SMTP server to use Gmail's SMTP? I'm using ClearOS as an IMAP mail server. Receiving mail from POP hosting is no problem, but setting up SMTP for clients on the same server is a challenge. Does anybody know how to use a Google mail account as the SMTP relay for ClearOS? Thank you.
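ClearOS's mail stack is Postfix underneath, so relaying through Gmail generally comes down to standard Postfix smarthost settings; a hedged sketch (Google typically requires an app password for this):

    # /etc/postfix/main.cf additions
    relayhost = [smtp.gmail.com]:587
    smtp_use_tls = yes
    smtp_sasl_auth_enable = yes
    smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
    smtp_sasl_security_options = noanonymous

    # /etc/postfix/sasl_passwd, then: postmap /etc/postfix/sasl_passwd && postfix reload
    [smtp.gmail.com]:587 you@gmail.com:app-password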

How to manage hotspot web-filtering, centrally, for several hotspots?

Posted: 15 Aug 2021 08:01 PM PDT

I manage a number of public hotspots, at different sites, with routers running the dd-wrt firmware, and I now want to centrally control the websites they have access to. My initial idea was to implement Squid as a transparent proxy (using iptables to forward router traffic) and set it up for filtering only. The only problem with this (if I understand it correctly?) is that the Squid server will have to have sufficient bandwidth to handle both outbound (from routers) and inbound traffic (to routers); the server will be remote to the routers, on the internet. I have the following server restrictions:

  • must be cloud-based, on the internet
  • as low as possible bandwidth
  • simple (quick) to implement solution
  • solution must be scalable (as routers are added)

My first question: is it possible to configure Squid to intercept and filter only the outbound requests from the routers, and let the inbound traffic from the requested websites go directly back to the routers?

Please note: I have considered using a captive portal solution, but that would take longer to implement than I have time for, and it would have the same traffic problem. I have also looked at OpenDNS for filtering, but its logging is not realtime, and good realtime logging is important to me.

Any suggestions on how this can be done using Squid, or any other relevant solutions, would be appreciated...
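On the filtering half, a domain blocklist in Squid is only a few lines; a minimal sketch (file path illustrative). On the bandwidth half, note that a TCP-level proxy terminates the client connection, so response traffic necessarily flows back through it; intercepting only the outbound direction isn't something Squid can do.

    # squid.conf
    acl blocked_sites dstdomain "/etc/squid/blocked_domains.txt"
    http_access deny blocked_sites

    # /etc/squid/blocked_domains.txt: one domain per line, e.g.
    # .example-banned.com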

Random Connections to MySQL refused (Error 111)

Posted: 15 Aug 2021 04:03 PM PDT

A Perl/CGI webapp that has been running fine for almost a year has started randomly failing to connect to a remotely hosted MySQL. The error thrown is:

Can't connect to MySQL server on 'xx.x.xxx.xx' (111)

Reloading the page often solves the problem. The client uses Perl, DBI, and SSL to connect to MySQL with the same configuration file each time.

MySQL 5.0 Server Running RH EL5

  • Quad-Core AMD Opteron(tm) Processor 2374 HE, 8 cores
  • Real Memory: 15.73 GB total, 11.81 GB used
  • networking is allowed in my.cnf
  • max-connections is not being reached
  • load is low
  • the server's firewall is open to the client's subnet
  • the mysql user has permissions from the client's subnet

I have my host looking into the problem, but so far we're all stumped as to why the occasional connection is (increasingly) getting refused.

Any advice on what to check for causes of the random connection refusals?

Can I host a VPN service on a shared Windows Host I already have?

Posted: 15 Aug 2021 10:06 PM PDT

I have a Windows hosting plan with the Plesk control panel, FTP, email, and other options commonly found in Windows hosting plans. I want to know whether I can turn my hosting plan into an L2TP/IPSec or PPTP VPN service. Your answers are really appreciated.
