Monday, May 9, 2022

Recent Questions - Server Fault

What do the numbers mean in 0/24, 0/16 and 0/8 when blocking ranges of IP addresses

Posted: 09 May 2022 05:57 AM PDT

From the answer in the post Block range of IP Addresses

To block 116.10.191.* addresses:

$ sudo iptables -A INPUT -s 116.10.191.0/24 -j DROP

To block 116.10.*.* addresses:

$ sudo iptables -A INPUT -s 116.10.0.0/16 -j DROP

To block 116.*.*.* addresses:

$ sudo iptables -A INPUT -s 116.0.0.0/8 -j DROP

I'd like to get a better understanding of the meaning and use of the numbers 0,8,16,24.

I already know that they are how you create a rule that blocks a range of IP addresses, but I don't understand how or why this notation represents a range.
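
For reference, the number after the slash is the count of leading bits that are fixed; the remaining bits are free to vary, which is what turns one address into a range. A rough sketch using the ipcalc utility (an assumption that it is installed; its output format differs between distributions) to print the resulting ranges:

$ ipcalc 116.10.191.0/24   # 24 bits fixed: 116.10.191.0 - 116.10.191.255 (256 addresses)
$ ipcalc 116.10.0.0/16     # 16 bits fixed: 116.10.0.0 - 116.10.255.255 (65,536 addresses)
$ ipcalc 116.0.0.0/8       #  8 bits fixed: 116.0.0.0 - 116.255.255.255 (16,777,216 addresses)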

Nginx proxy: remove specific path and empty POST request body

Posted: 09 May 2022 05:56 AM PDT

I'm using nginx as a proxy for a web service. I have a REST service as below, and I want to proxy it under a sub-URI of my domain:

https://www.example.com/myRestservice  

The service has methods like these:

http://1.1.1.1:123/api/work/method1
http://1.1.1.1:123/api/work/method2

As a result, I want to access the service's methods like:

https://www.example.com/Restservice/api/work/method1..  

When I try to use rewrite in nginx as below, I can reach the service, but then the POST request body arrives empty. I can see this in the service logs.

In my nginx config:

upstream RestService {
    server 1.1.1.1:123;
    server 1.1.1.2:123;
}
server {
    listen                443 ssl;
    server name           https://www.example.com;

    location ~ ^/Restservice/ {
        add_header Access-Control-Allow-Origin *;
        rewrite ^/Restservice/(.*) /$1 break;
        proxy_pass http://Restservice/;
        proxy_http_version  1.1;
    }
}

I also tried the location block like this; the result is the same.

location /Restservice {
    proxy_pass http://Restservice/;
}

In the nginx access log:

status : 500 request: POST /Restservice/api/work/method1 HTTP/1.1  

backup strategy for pc [migrated]

Posted: 09 May 2022 05:31 AM PDT

I'm thinking about a backup strategy for the following scenario:

W11 machine with 2 Disks:

onboard 256 GB SSD + a cheap HDD

The goal is to automatically create a full bootable disk image every Sunday from the SSD to the HDD, which permits:

  • booting from the image (HDD) in case of fatal SSD damage
  • recovering files and moving them manually from the HDD to the SSD (in case of undesirable file changes)

I'm open to other ideas as well, of course; a rough sketch of one option follows.
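
One possible sketch using only built-in Windows tooling (wbadmin plus Task Scheduler); the target drive letter E: is a placeholder for the HDD, and this is untested on Windows 11:

rem create a full system image of the SSD onto the HDD (assumed here to be E:)
wbadmin start backup -backupTarget:E: -allCritical -quiet

rem schedule the same command to run automatically every Sunday night
schtasks /Create /SC WEEKLY /D SUN /ST 03:00 /TN "WeeklyImage" /RU SYSTEM /TR "wbadmin start backup -backupTarget:E: -allCritical -quiet"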

regards

Unable to create a project in Azure DevOps 2019 on-premises

Posted: 09 May 2022 04:27 AM PDT

I have Azure DevOps Server on-premises, and I want to create a new project but I can't, even though I'm using the admin account. I can create a new collection, but I can't create new projects. Here's the log when I check it:

++ Executing - Operation: ProjectCreate, Group: ProjectCreate.TfsTeamBuild
[09:35:38.947] +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
[09:35:38.947] Executing step: Create the Team Project
[09:35:38.947]   Executing step: 'Create the Team Project' Build.CreateTeamProject (7 of 12)
[09:38:43.323]   [Error] The file exists.
[09:38:43.427]   System.IO.IOException: The file exists.
[09:38:43.427]      at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath)
[09:38:43.427]      at System.IO.Path.InternalGetTempFileName(Boolean checkHost)
[09:38:43.427]      at Microsoft.TeamFoundation.Build.Server.ProcessTemplate.UpdateCachedProcessParameters(IVssRequestContext requestContext, VersionSpec versionSpec)
[09:38:43.427]      at Microsoft.TeamFoundation.Build.Server.TeamFoundationBuildService.AddProcessTemplates(IVssRequestContext requestContext, IList`1 processTemplates)
[09:38:43.427]      at Microsoft.TeamFoundation.Build.Server.TeamFoundationBuildService.CreateBuiltInProcessTemplates(IVssRequestContext requestContext, String teamProjectUri, Boolean isUpgrade)
[09:38:43.427]      at Microsoft.TeamFoundation.Build.Server.TeamFoundationBuildService.CreateTeamProject(IVssRequestContext requestContext, String projectUri, IList`1 permissions)
[09:38:43.427]      at Microsoft.TeamFoundation.Server.Servicing.TFCollection.BuildStepPerformer.CreateTeamProject(ServicingContext servicingContext)
[09:38:43.427]      at Microsoft.TeamFoundation.Framework.Server.TeamFoundationStepPerformerBase.PerformHostStep(String servicingOperation, ServicingOperationTarget target, IServicingStep servicingStep, String stepData, ServicingContext servicingContext)
[09:38:43.427]      at Microsoft.TeamFoundation.Framework.Server.TeamFoundationStepPerformerBase.PerformStep(String servicingOperation, ServicingOperationTarget target, String stepType, String stepData, ServicingContext servicingContext)
[09:38:43.427]      at Microsoft.TeamFoundation.Framework.Server.ServicingStepDriver.PerformServicingStep(ServicingStep step, ServicingContext servicingContext, ServicingStepGroup group, ServicingOperation servicingOperation, Int32 stepNumber, Int32 totalSteps)
[09:38:43.427] Step failed: Create the Team Project. Execution time: 3 minutes and 4 seconds.
[09:38:43.427]   [StepDuration] 184.3850653
[09:38:43.427]   [GroupDuration] 184.4857136
[09:38:43.777]   [OperationDuration] 204.5521056
[09:38:43.787]   Clearing dictionary, removing all items

But there's no existing file, and the project name is unique. Any kind of support will be appreciated. Thanks & regards.

Apache ProxyPass redirects to "localhost:port" as a literal URL instead of the local service on that port

Posted: 09 May 2022 03:47 AM PDT

Environment

Server version: Apache/2.4.6 (CentOS)


I have two servers which are almost duplicates.

aaa.com. and bbb.com.

They have almost the same Apache rulesets.

aaa.com. config

<Location "/serviceEndpoint/">    ProxyPass http://localhost:8100/serviceEndpoint/    ProxyPassReverse http://localhost:8100/serviceEndpoint/  </Location>  <Location "/fruit/apple">    ProxyPass "/fruit/apple" "http://localhost:8100/serviceEndpoint/fruit/apple"    ProxyPassReverse "/fruit/apple" "http://localhost:8100/serviceEndpoint/fruit/apple"  </Location>  

So /serviceEndpoint is a service listening on port 8100, and /fruit/apple is one of its servlets.

bbb.com. config

<VirtualHost _default_:80>
  ProxyPass "/serviceEndpoint/" "http://localhost:20100/serviceEndpoint/"
  ProxyPassReverse "/serviceEndpoint/" "http://localhost:20100/serviceEndpoint/"

  ProxyPass "/fruit/apple" "http://localhost:20100/serviceEndpoint/fruit/apple"
  ProxyPassReverse "/fruit/apple" "http://localhost:20100/serviceEndpoint/fruit/apple"
</VirtualHost>

It looks the same, but it's inside a VirtualHost on port 80, if that makes any difference.
(Edit: I tested using the same config, but the result was the same.)

Problem

Both aaa.com/fruit/apple and bbb.com/fruit/apple work well.

But when the service uses response.sendRedirect() and redirects the browser to /fruit/apple,
only aaa.com works; bbb.com makes the client browser try to connect to the literal http://localhost:20100/fruit/apple.

aaa.com redirect response header

HTTP/1.1 302
Date: Mon, 09 May 2022 08:01:29 GMT
Server: Apache
X-Frame-Options: SAMEORIGIN
Strict-Transport-Security: max-age=63072000; includeSubDomains
Location: /fruit/#!/some_controller
Content-Length: 0
Set-Cookie: JSESSIONID=4EA61F0E6031621E540DBDC9F6C54D64; Path=/serviceEndpoint; HttpOnly
Set-Cookie: JSESSIONID=4EA61F0E6031621E540DBDC9F6C54D64; Secure; HttpOnly; SameSite=Strict
X-XSS-Protection: 1; mode=block
Keep-Alive: timeout=15, max=95
Connection: Keep-Alive

bbb.com redirect response header

HTTP/1.1 302
Date: Mon, 09 May 2022 08:01:29 GMT
Server: Apache-Coyote/1.1
X-Frame-Options: SAMEORIGIN
Strict-Transport-Security: max-age=63072000; includeSubDomains
Location: http://localhost:20100/fruit/#!/some_controller
Content-Length: 0
Set-Cookie: JSESSIONID=4EA61F0E6031621E540DBDC9F6C54D64; Path=/serviceEndpoint; HttpOnly
Set-Cookie: JSESSIONID=4EA61F0E6031621E540DBDC9F6C54D64; Secure; HttpOnly; SameSite=Strict
Keep-Alive: timeout=15, max=95
Connection: Keep-Alive

Question

From the Apache settings' side, what can cause this behavior, and how should I fix it?
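
For context, ProxyPassReverse only rewrites Location headers whose value matches one of the configured backend URL prefixes; the redirect above points at http://localhost:20100/fruit/..., which matches none of the mappings in the bbb.com config. A hedged sketch of an extra reverse mapping that would cover such redirects (untested against this exact setup):

ProxyPassReverse "/" "http://localhost:20100/"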

Setting up ntop on Ubuntu 22.04 with systemctl

Posted: 09 May 2022 02:47 AM PDT

I've installed ntop using the official docs, but used the Ubuntu 20.04 version of the package as there's no package for the latest version of Ubuntu (22.04). After installation it doesn't generate ntopng.conf in /etc/init.d/ntopng, and the same goes for /etc/ntopng/.

When I run systemctl list-unit-files --type service -all to see what services are running/installed, ntop isn't in the list, and there were no errors during the install.

Is it possible to run ntop on the latest Ubuntu release (22.04)? Is anybody else encountering issues installing ntop?

Sender dependent relay with transport maps in postfix

Posted: 09 May 2022 03:13 AM PDT

We have several SMTP servers in our company. We have transport maps in which we define which domains are sent to which SMTP server. Now we need a sender-dependent config for some users. The users use the same domain that is already in the transport maps table.

I tried sender_dependent_relayhost_maps, but it is not working when transport_maps is enabled.

Idea, what I want:

All @companydomain.tld mail is forwarded to the SMTP server in the transport_maps table:

companydomain.tld            smtp:[smtp-server.tld]  

Exception (forward some users' mail to another SMTP server):

user@companydomain.tld       [another-smtp-server.tld]
user2@companydomain.tld      [another-smtp-server.tld]

Is there any sender-dependent setting that overrides transport_maps?

KVM: Best performance for all guests

Posted: 09 May 2022 05:24 AM PDT

With KVM, what is the best way to provide the highest possible performance to all VMs?

The host has a hexa-core processor and 64 GB of RAM. 3-4 VMs should run on it.

The VMs are idle a lot of the time, but during performance peaks they should preferably have the full performance of the host available.

Is it a good idea to give all VMs 6 cores and 64 GB of RAM? Or what would make the most sense?

WSL-Docker: curl: (60) unable to get local issuer certificate

Posted: 09 May 2022 05:33 AM PDT

After a PC reconfiguration I am unable to use Docker properly, since some curl commands are rejected due to SSL/TLS issues.

In just one example curl -vfsSL https://apt.releases.hashicorp.com/gpg returns the following error:

*   Trying 52.222.214.125:443...
* TCP_NODELAY set
* Connected to apt.releases.hashicorp.com (52.222.214.125) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/certs/ca-certificates.crt
    CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (OUT), TLS alert, unknown CA (560):
* SSL certificate problem: unable to get local issuer certificate
* Closing connection 0
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: https://curl.haxx.se/docs/sslcerts.html

curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.

After some digging, I now know that this issue also occurs within my WSL image, but not on the host Windows OS. Hence, I believe this must be an issue that originates with my WSL setup and is not caused by Docker itself (?).

There are quite a few related questions on Server Fault/Stack Overflow, but none of the solutions I found really apply to this case.

FWIW I work at an enterprise, with an IT-issued OS. Obviously that could be a source of error, but they are unable to help me debug this issue. On a colleague's PC, however, it works flawlessly.

Any ideas?


PC Setup:

  • Windows 10 Enterprise
    • Version: 21H1
    • OS build: 19043.1645
    • Windows Feature Experience Pack: 120.2212.4170.0
  • WSL 2 with Ubuntu-20.04
  • Docker Desktop 4.7.1 (77678) with WSL 2 based engine

Update 1

As suggested by @Martin, I tried downloading https://www.amazontrust.com/repository/AmazonRootCA1.pem, put it inside /tmp in WSL Ubuntu, and reran the command curl --cacert /tmp/AmazonRootCA1.pem -vfsSL https://apt.releases.hashicorp.com/gpg to no avail:

curl --cacert /tmp/AmazonRootCA1.pem -vfsSL https://apt.releases.hashicorp.com/gpg
*   Trying 52.222.214.72:443...
* TCP_NODELAY set
* Connected to apt.releases.hashicorp.com (52.222.214.72) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
*   CAfile: /tmp/AmazonRootCA1.pem
    CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (OUT), TLS alert, unknown CA (560):
* SSL certificate problem: unable to get local issuer certificate
* Closing connection 0
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: https://curl.haxx.se/docs/sslcerts.html

curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.
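
One more thing that may be worth checking (an assumption, given this is an enterprise machine): if a TLS-intercepting proxy re-signs outbound traffic with a corporate root CA, that CA has to be added to the WSL trust store before curl will verify anything, roughly like this:

# export the corporate root certificate from Windows (certmgr.msc) as corp-root.crt, then in WSL:
sudo cp corp-root.crt /usr/local/share/ca-certificates/corp-root.crt
sudo update-ca-certificates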

docker swarm - highly available database

Posted: 09 May 2022 04:34 AM PDT

I am setting up high availability on two servers: I use Docker Swarm with two manager nodes (one is the leader), each with their respective applications (backend and frontend), and I use HAProxy to redirect to a single IP.

I have a problem with the database's data persistence: when I want to save data, it is only saved on one node and not on both.

What advice would you give me to solve this problem?

Ubuntu 22.04 - Intel X550 - Run a pre-up command to set ethtool network interface parameters at startup

Posted: 09 May 2022 02:22 AM PDT

Following the question at: https://askubuntu.com/questions/1406445/ubuntu-22-04-server-intel-x550-advertised-speed-not-correct I need to run a command at startup so that the necessary changes are permanent. From the Intel drivers documentation (https://downloadmirror.intel.com/727507/readme.txt), the command is:

pre-up ethtool -s <ethX> advertise 0x1800000001028 || true  

I tried to write the command in various files, without success.

Where should I write it?

Edit 1: Following Anton Danilov's answer, I did not find how to set up a solution with systemd.link, but I found a different way by creating a service in

/etc/systemd/system  

as explained here: https://bbs.archlinux.org/viewtopic.php?id=262075

It works, but is it the optimal way for Ubuntu 22.04 (the link above is a solution for Arch Linux and is from 2018)? If not, could someone please provide a complete working solution for Ubuntu Server 22.04?
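
For reference, a minimal one-shot unit of the kind described above might look like the sketch below; the interface name enp1s0 is a placeholder, and the ordering targets are an assumption rather than a verified requirement:

# /etc/systemd/system/ethtool-advertise.service (sketch)
[Unit]
Description=Set advertised link modes on the X550 interface
After=network.target

[Service]
Type=oneshot
ExecStart=/usr/sbin/ethtool -s enp1s0 advertise 0x1800000001028

[Install]
WantedBy=multi-user.target

Enable it once with systemctl enable ethtool-advertise.service so it runs at every boot.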

No response from ps aux | grep apache [closed]

Posted: 09 May 2022 03:23 AM PDT

When I run (in a bash terminal in VS Code):

ps aux | grep apache

I get no response. I am running XAMPP and have this config:

[screenshot of the XAMPP configuration]

I was expecting it to show me the Apache system user name but I get nothing. I did this a week or so ago and got a full list.

Any ideas?
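
As a side note, the grep itself can come up empty if the binary isn't actually named apache (XAMPP on Linux, for example, ships the daemon as httpd under /opt/lampp). A broader check might be:

ps aux | grep -E 'apache2|httpd|lampp'   # match the common Apache binary names
pgrep -fl 'apache2|httpd'                # or match against the full command line with pgrep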

How do you resolve to both public and private zones in a Split-Horizon DNS (using GCP Cloud DNS)?

Posted: 09 May 2022 02:05 AM PDT

We're using GCP and Cloud DNS to manage our domain and I'm trying to solve for these use cases:

  1. Have private records for things like Databases that can only be resolved within the company network (our VPC).
  2. Override public records with private IPs for alternative routing within the company network.
  3. Be able to issue DNS01 challenges and resolve the records within our network and publicly. We need this due to how cert-manager works (which we use to issue certificates with letsencrypt).

I've tried solving this with a public and a private zone (aka split-horizon DNS); however, this solution only solves use cases 1 and 2. And it only solves use case 2 if we ensure the private zone has a copy of all records in the public zone (where there isn't a private counterpart).

Use case 3 isn't met with this solution, as our cert-manager server creates the records in the public zone and then cannot resolve them in the public zone. Due to the specifics of our setup, customizing cert-manager to resolve both zones via some local configuration isn't ideal. It would also be difficult to have the records created in both zones, so again not the ideal solution.

What I'd like is for the private zone to forward requests to the public one if it doesn't have a record for a specific request. Is there a way of doing this, specifically using GCP Cloud DNS?

The ideal nslookup -> private zone -> public zone

Currently we have nslookup -> private zone -> error (NXDOMAIN) if no record

For example,

# While on my laptop
> nslookup ws1.example.com
...
Name:   ws1.example.com
Address: 34.111.111.111           # Public IP for web server

# While on the GCP network
> nslookup db.example.com
...
Name:   db.example.com
Address: 10.10.0.2                # Private IP for a database
> nslookup ws1.example.com
...
Name:   ws1.example.com
Address: 10.0.0.10                # Private IP (from private zone) for web server

This works fine for use cases 1 and 2 but when we try to resolve a record that only exists on the public zone...

# While on my laptop
> nslookup ws1.example.com
...
Name:   ws1.example.com
Address: 34.111.111.111           # Public IP for web server
> nslookup ws2.example.com        # We only have this record in the public zone
...
Name:   ws2.example.com
Address: 34.111.111.112           # Public IP for another web server

# While in the GCP VPC
> nslookup ws1.example.com
...
Name:   ws1.example.com
Address: 10.0.0.1                 # Private IP (override) for web server
> nslookup ws2.example.com        # We only have this record in the public zone
...
** server can't find ws2.example.com: NXDOMAIN    # Fails to resolve. Should look at private then public zone and resolve to 34.111.111.112.

Any suggestions?

As a workaround, for now, we've switched to using HTTP01 challenges for cert-manager but we'd prefer to use DNS01 if possible.

Setup postfix exclusively for local delivery

Posted: 09 May 2022 03:44 AM PDT

For development purposes I wanted an SMTP server that simply places all mail into a local mailbox. To achieve this, I tried to set up a minimal Postfix system.

# master.cf
smtp      inet       n  - n -     - smtpd
cleanup   unix       n  - n -     0 cleanup
qmgr      unix       n  - n 300   1 qmgr
rewrite   unix       -  - n -     - trivial-rewrite
bounce    unix       -  - n -     0 bounce
defer     unix       -  - n -     0 bounce
trace     unix       -  - n -     0 bounce
verify    unix       -  - n -     1 verify
error     unix       -  - n -     - error
retry     unix       -  - n -     - error
discard   unix       -  - n -     - discard
local     unix       -  n n -     - local
scache    unix       -  - n -     1 scache
proxymap  unix       -  - - -     1 proxymap
postlog   unix-dgram n  - n -     1 postlogd

# main.cf
compatibility_level = 3.7
queue_directory = /var/spool/postfix
command_directory = /usr/bin
daemon_directory = /usr/lib/postfix/bin
data_directory = /var/lib/postfix
mail_owner = postfix
inet_protocols = ipv4
unknown_local_recipient_reject_code = 550

mydestination = localhost
alias_maps = regexp:{{/.*/ mytargetuser@localhost}}
alias_database = $alias_maps

Talking to smtpd is no problem. I get successful responses through the entire conversation; however, in the end Postfix tries to use the smtp transport to deliver the mail, which is not enabled:

postfix/smtpd: connect from myhost.mydomain[127.0.0.1]
postfix/smtpd: 8D548E40850: client=myhost.mydomain[127.0.0.1]
postfix/cleanup: 8D548E40850: message-id=<20220506145639.8D548E40850@myhost>
postfix/qmgr: 8D548E40850: from=<nobody@example.org>, size=408, nrcpt=1 (queue active)
postfix/qmgr: warning: connect to transport private/smtp: Connection refused
postfix/error: 8D548E40850: to=<idontcare@no.where>, relay=none, delay=30, delays=30/0/0/0.01, dsn=4.3.0, status=deferred (mail transport unavailable)

Any clue why alias_maps is not working as I intended to use it here?
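
A guess at what might be happening (not verified): alias_maps is only consulted by the local delivery agent, i.e. for domains listed in mydestination, so mail addressed to any other domain is still routed to the (disabled) smtp transport before any alias lookup happens. A sketch that instead rewrites every recipient before routing, using virtual_alias_maps:

# main.cf (sketch)
virtual_alias_maps = regexp:/etc/postfix/virtual_regexp

# /etc/postfix/virtual_regexp -- rewrite every recipient to the local mailbox
/./    mytargetuser@localhost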

Allow Bitbucket pipeline runner to access ports other than 80 and 443 such as NFS

Posted: 09 May 2022 02:59 AM PDT

I am trying to persuade a Bitbucket pipeline that employs a runner on our GKE-based infrastructure to mount a directory from a GCE VM via NFS.

It seems that the outgoing traffic to NFS ports is blocked.

The pipeline has no problem accessing port 443 on an external web server.

No incoming traffic is visible to tcpdump on the NFS server.

Firewall logs show nothing being blocked.

The underlying runner containers (bb-k8s-runner and docker-in-docker) are both able to mount NFS directories from the server.

I'm wondering if anybody has a tip to enable the pipeline to access files on the NFS server.

What is the difference between 0.0.0.0/0 and 0.0.0.0/1?

Posted: 09 May 2022 05:18 AM PDT

Historically, I have mostly used 0.0.0.0/0 for "match every IP address". Recently, I saw a 0.0.0.0/1 subnet filter.

What is the difference between 0.0.0.0/0 and 0.0.0.0/1 and what's the practical use of 0.0.0.0/1?
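
In short, /0 fixes no bits and therefore matches all 2^32 IPv4 addresses, while /1 fixes the first bit, so 0.0.0.0/1 only covers 0.0.0.0 - 127.255.255.255 and 128.0.0.0/1 covers the other half. One common practical use is a VPN pushing both /1 routes so they win over an existing /0 default route by longest-prefix match without deleting it, for example:

ip route add 0.0.0.0/1   dev tun0
ip route add 128.0.0.0/1 dev tun0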

SELinux preventing mongod search access

Posted: 09 May 2022 05:32 AM PDT

I noticed I am getting some SELinux errors when running mongod for the UniFi controller program. Namely, I am getting:

SELinux is preventing /usr/bin/mongod from search access on the directory /.
SELinux is preventing /usr/bin/mongod from search access on the directory /var/lib/nfs
SELinux is preventing /usr/bin/mongod from search access on the directory fs
SELinux is preventing /usr/bin/mongod from search access on the directory /var/lib/snapd

I don't see any reason why mongod needs search access to any of these directories, and I am wondering if/how I can stop it from trying to search them; I don't think giving it access to my entire system is really a solution.

The actual database is stored in the default location (config file below), and the SELinux types are set correctly for those directories, as the service does seem to run and no errors are thrown about accessing /var/lib/mongo.

# mongod.conf

# for documentation of all options, see:
#   http://docs.mongodb.org/manual/reference/configuration-options/

# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log

# Where and how to store data.
storage:
  dbPath: /var/lib/mongo
  journal:
    enabled: true
#  engine:
#  wiredTiger:

# how the process runs
processManagement:
  fork: true  # fork and run in background
  pidFilePath: /var/run/mongodb/mongod.pid  # location of pidfile
  timeZoneInfo: /usr/share/zoneinfo

# network interfaces
net:
  port: 27017
  bindIp: 127.0.0.1  # Enter 0.0.0.0,:: to bind to all IPv4 and IPv6 addresses or, alternatively, use the net.bindIpAll setting.
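
If the denials turn out to be harmless (mongod scanning mount points it doesn't actually need), one possible approach is to silence just these events with a local dontaudit module built from the audit log, rather than loosening anything globally. A sketch, assuming the audit and policycoreutils tooling is installed:

ausearch -m avc -ts recent -c mongod | audit2allow -D -M mongod_local   # generate dontaudit rules from the logged denials
semodule -i mongod_local.pp                                             # load the resulting module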

How to configure nginx.conf and php-fpm using brew services on macOS in order to run PHP?

Posted: 09 May 2022 04:05 AM PDT

I have the following logs in the php-fpm log:

[19-Jan-2020 15:09:33] NOTICE: [pool www] 'user' directive is ignored when FPM is not running as root
[19-Jan-2020 15:09:33] NOTICE: [pool www] 'user' directive is ignored when FPM is not running as root
[19-Jan-2020 15:09:33] NOTICE: [pool www] 'group' directive is ignored when FPM is not running as root
[19-Jan-2020 15:09:33] NOTICE: [pool www] 'group' directive is ignored when FPM is not running as root
[19-Jan-2020 15:09:33] NOTICE: fpm is running, pid 11067
[19-Jan-2020 15:09:33] NOTICE: ready to handle connections

Here is the nginx.conf file:

#user  nobody;
worker_processes  1;

#error_log  logs/error.log;
#error_log  logs/error.log  notice;
#error_log  logs/error.log  info;

#pid        logs/nginx.pid;


events {
    worker_connections  1024;
}


http {
    include       mime.types;
    default_type  application/octet-stream;

    #log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
    #                  '$status $body_bytes_sent "$http_referer" '
    #                  '"$http_user_agent" "$http_x_forwarded_for"';

    #access_log  logs/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    #keepalive_timeout  0;
    keepalive_timeout  65;

    #gzip  on;

    server {
        listen       8888;
        server_name  localhost;

        #charset koi8-r;

        #access_log  logs/host.access.log  main;

        location / {
            root   html;
            index  index.html index.htm index.php;
        }

        #error_page  404              /404.html;

        # redirect server error pages to the static page /50x.html
        #
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }

        # proxy the PHP scripts to Apache listening on 127.0.0.1:80
        #
        location ~ \.php$ {
            proxy_pass   http://127.0.0.1;
        }

        # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
        #
        location ~ \.php$ {
            root           html;
            fastcgi_pass   127.0.0.1:9000;
            fastcgi_index  index.php;
            fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
            # fastcgi_param  SCRIPT_FILENAME $document_root/$fastcgi_script_name;

            include        fastcgi_params;
        }

        # deny access to .htaccess files, if Apache's document root
        # concurs with nginx's one
        #
        location ~ /\.ht {
            deny  all;
        }
    }


    # another virtual host using mix of IP-, name-, and port-based configuration
    #
    #server {
    #    listen       8000;
    #    listen       somename:8080;
    #    server_name  somename  alias  another.alias;

    #    location / {
    #        root   html;
    #        index  index.html index.htm;
    #    }
    #}


    # HTTPS server
    #
    #server {
    #    listen       443 ssl;
    #    server_name  localhost;

    #    ssl_certificate      cert.pem;
    #    ssl_certificate_key  cert.key;

    #    ssl_session_cache    shared:SSL:1m;
    #    ssl_session_timeout  5m;

    #    ssl_ciphers  HIGH:!aNULL:!MD5;
    #    ssl_prefer_server_ciphers  on;

    #    location / {
    #        root   html;
    #        index  index.html index.htm;
    #    }
    #}

    include servers/*;
}

I am trying to run the phpinfo() function; it gives me a 502 error. Notes: I am using php@7.2 and php-fpm 7.2, and I already changed the user and group in the php-fpm.conf file. Any help will be appreciated. Thanks in advance.
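
In case it helps interpret the config above: with two regex locations for \.php$, nginx uses the first one that matches, so PHP requests are proxied to 127.0.0.1:80 rather than reaching php-fpm, and the FastCGI block also points SCRIPT_FILENAME at /scripts. A minimal sketch of a single PHP location, assuming php-fpm really does listen on 127.0.0.1:9000:

location ~ \.php$ {
    root           html;
    fastcgi_pass   127.0.0.1:9000;
    fastcgi_index  index.php;
    fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
    include        fastcgi_params;
}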

Azure AD Connect change sync key userprincipalname to mail attribute

Posted: 09 May 2022 05:38 AM PDT

What is the recommended way to change the sync attribute from userPrincipalName to mail, e.g.:

Option to set USER PRINCIPAL NAME: you only get this option when you FIRST install AD Connect.

As far as I can tell, it's: disable sync, remove, and re-install.

kubernetes dns resolver in nginx

Posted: 09 May 2022 04:36 AM PDT

I was developing locally in docker-compose, and had an nginx container doing a simple proxy_pass like so:

location /app/ {
    proxy_pass http://webapp:3000/;

    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;

    resolver 127.0.0.11;
}

I now want to move over to Kubernetes in GKE, and the last line is giving me trouble.

I tried to switch the resolver to:

    resolver kube-dns;  

I also tried various other IPs and names, but I keep getting an error along the lines of:

nginx: [emerg] host not found in resolver "kube-dns"  

My kubernetes setup is that I have a single pod, with 2 containers: 'webapp' and 'nginx'. I simply want to have an external service pointing to nginx that can proxy_pass to webapp.

Any ideas?
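
Worth noting for the single-pod case: containers in the same pod share one network namespace, so nginx can reach the webapp container on localhost and no resolver directive is needed at all. A sketch:

location /app/ {
    proxy_pass http://127.0.0.1:3000/;

    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
}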

get NS record for domain using dig

Posted: 09 May 2022 04:04 AM PDT

I'm currently trying to write a script that validates whether a given record attached to a name is available via all nameservers that are responsible for that name.

E.g. I would like to check whether there is an A record for foo.example.com available at all NS entries for the example.com domain (a.iana-servers.net and b.iana-servers.net)

The script works by first querying the NS records for the given name (or its parents if that fails; e.g. since foo.example.com. doesn't have an NS entry, we try example.com. next and finally .com.), and then checking the A record with all nameservers.

name=foo.example.com
# get the nameservers for ${name}
sname=${name}
until [ -z "${sname}" ]; do
    dns=$(dig +short NS "${sname}")
    if [ "x${dns}" != "x" ]; then
      break
    fi
    sname=${sname#*.}
done
# now that we have a list of nameservers in $dns, we query them
for ns in $dns; do
    dig +short A "${name}" @"${ns}"
done

This kind of works, unless the name is actually a CNAME. In this case, a dig NS query will return the CNAME record (rather than the NS record or no record):

$ dig +noall +answer NS foo.example.com
foo.example.com. 300 IN CNAME bar.example.com.
$ dig +short NS foo.example.com
bar.example.com.
$ dig A foo.example.com @bar.example.com
;; global options: +cmd
;; connection timed out; no servers could be reached
$

Instead I would like to have something like:

$ dig +short NS foo.example.com
$ dig +short NS example.com
a.iana-servers.net.
b.iana-servers.net.
$ dig +short A foo.example.com @a.iana-servers.net.
93.184.216.34
$

So my question is: how can I force dig to only return the NS records, and not some other record which points to a host that is not a nameserver?

One obvious solution is to parse the output of dig +noall +answer to see whether it actually contains an NS record, but this seems rather clumsy and error-prone...
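
One way to keep that filtering to a single field: in dig's answer format the fourth column is the record type and the fifth the record data, so the NS lookup in the loop could be restricted like this (admittedly still parsing, but only on the type column):

dns=$(dig +noall +answer NS "${sname}" | awk '$4 == "NS" {print $5}')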

Windows 10 Pro: RDP disconnecting every 10 - 30 seconds

Posted: 09 May 2022 05:04 AM PDT

Just looking for some brainstorming help.

I have a (fully updated) Windows 10 Pro desktop which I regularly connect to using RDP from a Mac running Microsoft Remote Desktop (latest version).

The Windows 10 Pro machine is using a static IP on 192.168.1.0/24 network.

When the Mac is on 192.168.1.0/24 as well, I can stay connected to the Windows 10 Pro machine for hours with no problem.

Sometimes I work from another site on 192.168.2.0/24 network. There is a wireless link between both sites. The network path is something like this:

Internet <- NAT <- Site1: 192.168.1.0/24 -> NAT -> 192.168.3.0/29 <- NAT <- Site2: 192.168.2.0/24

Whenever I try to connect to the Win10 PC at Site1 from the Mac at Site2, I can easily and quickly establish an RDP connection, and I can even use the connection just fine for anywhere from 10 - 60 seconds, and then the screen freezes and I get disconnected from the Win10 PC.

You might say, well maybe I have a problem with my wireless link, but a continuous ping from Site2 to Site1 shows no problems with the connection. Even more telling, I have another RDP server running on a Win10 Pro machine, but it is completely offsite and I access it through the Internet at Site1. In other words, from Site2 through Site1 and then out the Internet, I am accessing another RDP server also running Win10, and I can stay connected to that machine for hours on end.

So what is changing from Site1 to Site2 that is causing me to lose the RDP connection every time I connect? Is it a NAT problem? The weird thing I really don't understand: if I had some critical configuration or network problem, I shouldn't be able to connect to RDP at all, so why does it let me connect without problems, function without problems for about 30 seconds, and then suddenly disconnect me seemingly without reason? It doesn't make sense.

Amazon EC2 ubuntu instance ifconfig does not show interfaces attached by attach_network_interface

Posted: 09 May 2022 05:04 AM PDT

I have launched a c3.xlarge ubuntu instance in a VPC. This instance supports 4 interfaces. I use ec2 python APIs to create_network_interface and attach_network_interface to add eth1, eth2, and eth3. On the AWS console, the instance is up and running. All 4 network interfaces are shown in the AWS console with the correct subnet ID.

When I ssh into the instance and use "ifconfig" to show the interfaces, only eth0 is shown. If I use "ifconfig -a", I can see eth0-3, but only eth0 has an IP address assigned to it.

Am I missing anything? Thanks in advance....


Edit: From the AWS console EC2 dashboard, I clicked on the instance -> Description, scrolled down to the "Network interfaces" portion, and it shows all of eth0, eth1, eth2, and eth3. If I click on eth1 - eth3, they all show the IP address and status like this:

Network Interface: eth1
Interface ID: eni-119f304f
VPC ID: vpc-873db7e2
Attachment Owner: 17xxxxxxxx79
Attachment Status: attached
Attachment Time: Fri Sep 09 10:58:58 GMT-700 2016
Delete on Terminate: false
Private IP Address: 10.31.2.12
Private DNS Name: ip-10-31-2-12.us-west-2.compute.internal
Elastic IP Address:
Source/Dest. Check: true
Description: cluster001-demux-peer1
Security Groups: cluster001-demux

The private IP addresses are created and assigned to those network interfaces from AWS's point of view.

The /etc/network/interfaces shows the normal things:

auto lo
iface lo inet loopback

# Source interfaces
# Please check /etc/network/interfaces.d before changing this file
# as interfaces may have been defined in /etc/network/interfaces.d
# NOTE: the primary ethernet device is defined in
# /etc/network/interfaces.d/eth0
# See LP: #1262951
source /etc/network/interfaces.d/*.cfg
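
For reference, the extra ENIs are attached at the EC2 level, but nothing inside this guest configures them; one sketch (assuming the VPC's DHCP should hand each interface its private IP) is to add a stanza per interface under /etc/network/interfaces.d/ and bring it up:

# /etc/network/interfaces.d/eth1.cfg (sketch; repeat for eth2 and eth3)
auto eth1
iface eth1 inet dhcp

sudo ifup eth1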

How can VM and Docker bridge traffic be routed through a pfSense VM?

Posted: 09 May 2022 02:05 AM PDT

I think this question is a result of me not being able to wrap my head around Docker networking and not being super great at Slackware. It seems like there should be a simple solution; I'm just totally missing it.

I have an UnRAID server (which is built on top of Slackware), and on this server I have some Docker containers running as well as a few VMs running via KVM. I have pfSense in one of those VMs, and I would like to route traffic from Docker and other VMs through pfSense.

When creating a VM, UnRAID gives three options by default for choosing a network bridge:

  • br0 - allows a VM to exist as its own entity on the network, with direct access to the LAN and an IP assigned from the router
  • vibr0 - a virtual bridge managed by the host which keeps the VM isolated from the LAN
  • docker0 - Docker's bridge

I figured out that I could add all three of these interfaces to pfSense: assign br0 as the WAN interface, and vibr0 and docker0 as LAN interfaces. What I'm stuck on now is how to route the traffic from the two LAN interfaces through the firewall to the WAN. How can I do this?

I have tried a few completely ineffective things, such as setting the IP of the docker0 interface in pfSense to 192.168.2.1 and setting the default gateway in the docker0 bridge configuration to 192.168.2.1, but that doesn't seem to have changed anything. What fundamental aspect am I missing here?

To summarize, I would like to route traffic from the Docker containers and from the other VMs to what pfSense considers to be the LAN ports; from there it will be routed to my actual LAN through what pfSense considers to be the WAN port. Or: how do I disconnect vibr0 and docker0 from the host's eth0 interface?

Reduce cron log level with systemd

Posted: 09 May 2022 04:40 AM PDT

Googling for a solution, I only found articles telling me how to do it on old systems and not under a systemd-maintained Linux: by changing the cron init script and adding the -L parameter to the command line.

I have a cron job that runs every minute. It logs every start and additionally a pam_unix entry for each session opened and closed for the user running cron. This is a lot of babbling in the journald log. How can I set the log level in a systemd environment so that I only get errors and fatal messages documented?
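
Under systemd the equivalent of editing the init script is a drop-in that overrides ExecStart. A sketch for Debian/Ubuntu's cron package (an assumption about the distribution; there -L is a bitmask and -L 4 keeps only failed-job logging), keeping in mind that the pam_unix session lines come from PAM rather than from cron and would need a separate change:

# systemctl edit cron    -> creates /etc/systemd/system/cron.service.d/override.conf
[Service]
ExecStart=
ExecStart=/usr/sbin/cron -f -L 4

Afterwards run systemctl daemon-reload and systemctl restart cron.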

NGINX Unicorn 504 Gateway Time-out

Posted: 09 May 2022 04:05 AM PDT

I tried everything I found on Google about this, but nothing works.

My NGINX default:

upstream app {
    server unix:/tmp/unicorn.rails.sock fail_timeout=0;
}

server {
    listen   80;
    root /home/rails/public;
    server_name _;
    index index.htm index.html;

    location / {
            try_files $uri/index.html $uri.html $uri @app;
    }

    location ~* ^.+\.(jpg|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|pdf|ppt|txt|tar|mid|midi|wav|bmp|rtf|mp3|flv|mpeg|avi)$ {
            try_files $uri @app;
    }

    location @app {
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            proxy_pass http://app;
    }
}

NGINX Error log:

    *12 connect() to unix:/tmp/unicorn.myapp.sock failed (2: No such file or directory) while connecting to upstream, client: 46.228.180.65, server: _, request: "GET / HTTP/1.1", upstream: "http://unix:/tmp/unicorn.myapp.sock:/", host: "178.62.102.154"  

Can you help to fix it?

/home/rails/config/unicorn.rb

working_directory "/home/rails"
pid "/home/rails/pids/unicorn.pid"
stderr_path "/home/rails/log/unicorn.log"
stdout_path "/home/rails/log/unicorn.log"
listen "/tmp/unicorn.rails.sock"
worker_processes 2
timeout 30

Displaying a remote SSL certificate details using CLI tools

Posted: 09 May 2022 03:02 AM PDT

In Chrome, clicking on the green HTTPS lock icon opens a window with the certificate details:

[screenshot of Chrome's certificate details window]

When I tried the same with cURL, I got only some of the information:

$ curl -vvI https://gnupg.org
* Rebuilt URL to: https://gnupg.org/
* Hostname was NOT found in DNS cache
*   Trying 217.69.76.60...
* Connected to gnupg.org (217.69.76.60) port 443 (#0)
* TLS 1.2 connection using TLS_DHE_RSA_WITH_AES_128_CBC_SHA
* Server certificate: gnupg.org
* Server certificate: Gandi Standard SSL CA
* Server certificate: UTN-USERFirst-Hardware
> HEAD / HTTP/1.1
> User-Agent: curl/7.37.1
> Host: gnupg.org
> Accept: */*

Any idea how to get the full certificate information from a command line tool (cURL or other)?
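
One common way to dump the full certificate from the command line is openssl s_client piped into openssl x509, for example:

openssl s_client -connect gnupg.org:443 -servername gnupg.org </dev/null 2>/dev/null | openssl x509 -noout -text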

How can I check the partition name in FreeBSD?

Posted: 09 May 2022 03:22 AM PDT

I am currently running my server in rescue mode due to the firewall issues. In order to disable the firewall I would have to mount the / partition.

My problem is that I don't know/remember the name of the partition to mount. I thought it would be /dev/ada0 (as on my similar server bought at the same time), but there is no such partition:

mount /dev/ada0 /mnt
mount: /dev/ada0: Invalid argument

The OVH web tutorial says it's possible to check the partition table via the fdisk -l command; however, that won't work on FreeBSD:

# fdisk -l
fdisk: illegal option -- l

Is there another way to check the partition table?
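
For what it's worth, FreeBSD's own tools for listing disks and partition tables are gpart and camcontrol rather than fdisk -l, e.g.:

# gpart show          # print the partition table of every disk the kernel sees
# camcontrol devlist  # list the underlying ATA/SCSI devices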

Automate mounting a persistent CIFS drive natively on Windows.

Posted: 09 May 2022 03:06 AM PDT

Trying to create a script to automate mounting CIFS shares as drives on Windows 2008/2012 Server. The share requires a login (unfortunately, AD cannot be used) and needs to be mounted as a persistent drive that survives reboots.

Windows allows the below:

net use x: \\10.243.212.19\demo_nas_share /USER:username password /PERSISTENT:YES  

However, the above won't save the credential for the next boot. We need to use:

net use x: \\10.243.212.19\demo_nas_share /SAVECRED /PERSISTENT:YES  

But this command only accepts the login details via a prompt, which is difficult to call from a script. I'm not sure if a default Windows Server install has a native tool like 'Expect' to automate this. I'd like to avoid installing a third-party utility.

NOTE: You cannot combine /USER and /SAVECRED. This apparently was supported in some older versions of Windows, though.

The other commonly suggested solution is to put the command into the startup folder, but I don't want to expose the password in plain text.

Can anyone recommend a native solution?
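
One native possibility (a sketch, not verified on 2008/2012): store the credential with cmdkey first, then map the drive persistently without /SAVECRED. The cmdkey line only has to be run once, after which the password lives in Credential Manager rather than in a startup script:

cmdkey /add:10.243.212.19 /user:username /pass:password
net use x: \\10.243.212.19\demo_nas_share /PERSISTENT:YES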

Logwatch configured for nginx with custom log format gives empty output

Posted: 09 May 2022 03:06 AM PDT

Problem

I have configured logwatch (CentOS 5.8, x64) to include nginx, using this as a guideline and using the Apache and nginx documentation on log formats. The problem is that I'm using a specific log format:

log_format  main  '$remote_addr - $remote_user [$time_local] "$request" $scheme:$server_port '
                  '$status $body_bytes_sent "$http_referer" '
                  'Upstream ["$upstream_addr" ($upstream_response_time) $upstream_status : $upstream_cache_status] '
                  '"$http_user_agent" "$http_x_forwarded_for"';

(from /etc/nginx/nginx.conf)

I have translated this log format into:

$LogFormat "%h %l %u %t \"%r\" %H:%p %>s %b \"%{Referer}i\" Upstream [\"%{Upstream-address}e\" (%{Upstream-response-time}e) %{Upstream-status}e : %{Upstream-cache-status}e] \"%{User-Agent}i\" \"%{X-Forwarded-For}e\""  

for Logwatch. While studying /usr/share/logwatch/scripts/services/http, I found that anything %{...}e that is not predefined will be ignored, so I thought this would be the best way to include these upstream variables.

Logwatch doesn't give any output for nginx, though.

What I've done

I have created the following logwatch files: /usr/share/logwatch/default.conf/logfiles/nginx.conf:

########################################################
#   Define log file group for nginx
# http://8bitpipe.com/?p=516
########################################################

# What actual file?  Defaults to LogPath if not absolute path....
LogFile = nginx/*access.log

# If the archives are searched, here is one or more line
# (optionally containing wildcards) that tell where they are...
#If you use a "-" in naming add that as well -mgt
Archive = nginx/archive/*access.log*

# Expand the repeats (actually just removes them now)
*ExpandRepeats

# Keep only the lines in the proper date range...
*ApplyhttpDate

/usr/share/logwatch/default.conf/services/nginx.conf:

###########################################################################
# Configuration file for nginx filter
###########################################################################

Title = "nginx"

# Which logfile group...
LogFile = nginx

# Define the log file format
#
# This is now the same as the LogFormat parameter in the configuration file
# for httpd.  Multiple instances of declared LogFormats in the httpd
# configuration file can be declared here by concatenating them with the
# '|' character.  The default, shown below, includes the Combined Log Format,
# the Common Log Format, and the default SSL log format.
#$LogFormat = "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"|%h %l %u %t \"%r\" %>s %b|%t %h %{SSL_PROTOCOL}x %{SSL_CIPHER}x \"%r\" %b"
$LogFormat "%h %l %u %t \"%r\" %H:%p %>s %b \"%{Referer}i\" Upstream [\"%{Upstream-address}e\" (%{Upstream-response-time}e) %{Upstream-status}e : %{Upstream-cache-status}e] \"%{User-Agent}i\" \"%{X-Forwarded-For}e\""

# The following is supported for backwards compatibility, but deprecated:
# Define the log file format
#
#   the only currently supported fields are:
#                       client_ip
#                       request
#                       http_rc
#                       bytes_transfered
#                       agent
#
#$HTTP_FIELDS = "client_ip ident userid timestamp request http_rc bytes_transfered referrer agent"
#$HTTP_FORMAT = "space     space space    brace    quote   space        space       quote   quote"
# Define the field formats
#
#   the only currently supported formats are:
#                       space = space delimited field
#                       quote = quoted ("..") space delimited field
#                       brace = braced ([..]) space delimited field

# Flag to ignore 4xx and 5xx error messages as possible hack attempts
#
# Set flag to 1 to enable ignore
# or set to 0 to disable
$HTTP_IGNORE_ERROR_HACKS = 0

# Ignore requests
# Note - will not do ANY processing, counts, etc... just skip it and go to
# the next entry in the log file.
# Examples:
# 1. Ignore all URLs starting with /model/ and ending with 1 to 10 digits
#   $HTTP_IGNORE_URLS = "^/model/\d{1,10}$"
#
# 2. Ignore all URLs starting with /model/ and ending with 1 to 10 digits and
#   all URLS starting with /photographer and ending with 1 to 10 digits
#   $HTTP_IGNORE_URLS = "^/model/\d{1,10}$|^/photographer/\d{1,10}$"
#   or simply:
#   $HTTP_IGNORE_URLS = "^/(model|photographer)/\d{1,10}$"
#
# vi: shiftwidth=3 tabstop=3 et

And I have symlinked /usr/share/logwatch/scripts/services/http to /usr/share/logwatch/scripts/services/nginx.

This does not give any error when executing logwatch, but it also doesn't give any output, while there are definitely logfiles to parse.

Executing logwatch --service nginx --print --range All --debug 7 gives, for example:

** lot of blabla about config files **

export LOGWATCH_DATE_RANGE='all'
export LOGWATCH_OUTPUT_TYPE='unformatted'
export LOGWATCH_TEMP_DIR='/var/cache/logwatch/logwatch.vdVyg9y2/'
export LOGWATCH_DEBUG='7'

Preprocessing LogFile: nginx
'/var/log/nginx/www.xxxx1.org-access.log' '/var/log/nginx/www.xxxx2.com-access.log' '/var/log/nginx/www.xxxx3.com-access.log' '/var/log/nginx/www.xxxx4.com-access.log' '/var/log/nginx/www.xxxx5.com-access.log' '/var/log/nginx/www.xxxx6.com-access.log' '/var/log/nginx/www.xxxx7.com-access.log' '/var/log/nginx/www.xxxx8.com-access.log' '/var/log/nginx/www.xxxx9.com-access.log' '/var/log/nginx/www.xxxx10.com-access.log' '/var/log/nginx/www.xxxx11.com-access.log' '/var/log/nginx/www.xxxx12-access.log' '/var/log/nginx/www.xxxx13.nu-access.log' '/var/log/nginx/www.xxxx14.org-access.log' 2>/dev/null | /usr/bin/perl /usr/share/logwatch/scripts/shared/expandrepeats ''| /usr/bin/perl /usr/share/logwatch/scripts/shared/applyhttpdate ''>/var/cache/logwatch/logwatch.vdVyg9y2/nginx

TimeFilter: Period is all
TimeFilter: SearchDate is (../.../....:..:..:..)
TimeFilter: Debug SearchDate is ( / / )
DEBUG: Inside ApplyHTTPDate...
DEBUG: Looking For: (../.../....:..:..:..)
export http_ignore_error_hacks='0'
export logformat "%h %l %u %t \"%r\" %h:%p %>s %b \"%{referer}i\" upstream [\"%{upstream-address}e\" (%{upstream-response-time}e) %{upstream-status}e : %{upstream-cache-status}e] \"%{user-agent}i\" \"%{x-forwarded-for}e\""=''

Processing Service: nginx
( cat /var/cache/logwatch/logwatch.vdVyg9y2/nginx  |  /usr/bin/perl /usr/share/logwatch/scripts/services/nginx) 2>&1

Why am I not getting any output?
