Saturday, June 4, 2022

Recent Questions - Server Fault

Run a perl script in a browser via a link

Posted: 04 Jun 2022 03:04 PM PDT

I'm attempting to get a perl script up and running on Oracle Linux 8.5.

My Apache server and virtual hosts work with static html.

My test virtual host, fnu, has a very basic perl script named hw.pl in /var/www/fnu:

#!/usr/bin/perl
print "Content-type: text/html\n\n";
print "Hello, World. This is fnu.";

I have a link from index.html to hw.pl, permissions open and owner set to apache:

-rwxrwxrwx. 1 apache apache 89 Jun 4 20:59 hw.pl
lrwxrwxrwx. 1 apache apache 5 Jun 4 20:59 index.html -> hw.pl

Here's the site config in /etc/httpd/conf.d:

<VirtualHost *:80>
DocumentRoot "/var/www/fnu/"
ServerName fnu.[obscured].net
ServerAlias fnu
ErrorLog /var/log/fnu/error.log
CustomLog /var/log/fnu/request.log combined
</VirtualHost>

<Directory "/var/www/fnu">
Options +ExecCGI +SymLinksIfOwnerMatch
AddHandler cgi-script .cgi .pl
</Directory>

If I point a browser to fnu/hw.pl, I get the result I expect - the script output. If I point the browser to fnu/, I get the raw contents of the script file instead. So Apache is following the symlink, but it's not executing the target as a Perl script once it gets there. There's nothing useful in /var/log/fnu/error.log.

SELinux is set to Permissive.

I appreciate any assistance.

[Screenshot: script output as expected]

[Screenshot: script file contents]
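A plausible explanation (an assumption, not confirmed by the logs): Apache chooses the handler from the extension of the *requested* filename, so a request for index.html is served as static .html even though the symlink resolves to hw.pl. A minimal sketch of a fix, keeping everything else as posted, is to make the script itself the directory index:

```apache
<Directory "/var/www/fnu">
    Options +ExecCGI +SymLinksIfOwnerMatch
    AddHandler cgi-script .cgi .pl
    # Serve the script directly as the index instead of an .html
    # symlink, so the .pl handler applies to requests for fnu/.
    DirectoryIndex hw.pl
</Directory>
```

Alternatively, naming the symlink index.pl (and adding it to DirectoryIndex) would have the same effect.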

RDP into VM without an OS

Posted: 04 Jun 2022 02:10 PM PDT

Is it possible to RDP into a VirtualBox VM that has no OS installed yet, in order to install one on it?

I have a headless Ubuntu server with VirtualBox installed on it. I created a VM on which I would like to install Ubuntu, and I start the VM with remote display enabled.

But I am unable to connect to it from my Windows 11 machine using RDP.

For network mode I have tried NAT Network and also Bridged Mode.

For the RDP address I have tried both the IP of the physical server and the IP of the actual VM (in Bridged Mode).

I must be missing something.
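For reference, VirtualBox's VRDE server runs on the *host*, not inside the guest, so it works before any OS is installed: you connect to the host's IP on the VRDE port, not to the VM's (future) IP. A sketch of enabling it from the host shell (the VM name and port are assumptions):

```
# Enable the VRDE remote display server for the VM and pin its port.
VBoxManage modifyvm "ubuntu-vm" --vrde on --vrdeport 5012

# Start the VM headless, then RDP to <host-ip>:5012 from Windows.
VBoxManage startvm "ubuntu-vm" --type headless
```

Note that VRDE typically requires the Oracle Extension Pack to be installed on the host.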

Blocking Port 8080 from direct IP access but allow access via domain (Virtualhost Config)

Posted: 04 Jun 2022 02:00 PM PDT

I want to access the web server from http://subdomain2.domain.com but not via the direct IP (http://1.1.1.1:8080). For instance, below is my virtual host config file (site2.conf) in /etc/apache2/sites-available/, but the <VirtualHost *:8080> ... </VirtualHost> section doesn't seem to have any effect.

<VirtualHost *:80>
    ServerName subdomain2.domain.com
    ServerAlias www.subdomain2.domain.com
    ServerAdmin admin@domain.com
    DocumentRoot /var/www/site2
    Keepalive On

    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined

    RewriteEngine on
    RewriteCond %{HTTP:UPGRADE} ^WebSocket$ [NC]
    RewriteCond %{HTTP:CONNECTION} ^Upgrade$ [NC]
    RewriteRule .* ws://localhost:8080%{REQUEST_URI} [P]

    ProxyPreserveHost On
    ProxyPass "/stream" ws://localhost:8080/ retry=0 timeout=5
    ProxyPass "/" http://localhost:8080/ retry=0 timeout=5
    ProxyPassReverse / http://localhost:8080/
</VirtualHost>

<VirtualHost *:8080>
    ServerName 1.1.1.1:8080
    ServerAlias 2001:0db8:85a3:0000:0000:8a2e:0370:7334:8080
    DocumentRoot /var/www/site2
    <Location />
        Require all denied
    </Location>
    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>

For http://subdomain1.domain.com, I have already implemented this successfully in (000-default.conf) in /etc/apache2/sites-available/, and it's working fine after adding a <VirtualHost *:80> ... </VirtualHost> section. Below is the code for reference:

<VirtualHost *:80>
    ServerName subdomain1.domain.com
    ServerAlias www.subdomain1.domain.com
    ServerAdmin admin@domain.com
    DocumentRoot /var/www

    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>

<VirtualHost *:80>
    ServerName 1.1.1.1
    ServerAlias 2001:0db8:85a3:0000:0000:8a2e:0370:7334
    DocumentRoot /var/www
    <Location />
        Require all denied
    </Location>

    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>
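One likely reason the *:8080 vhost has no effect (an assumption based on the posted config): port 8080 is served by the proxied backend application itself, not by Apache, so an Apache vhost on *:8080 never sees those requests unless Apache also has a `Listen 8080` directive - and then it would conflict with the backend. A simpler sketch is to stop exposing the backend port at all:

```shell
# Option 1: bind the backend application to loopback only
# (127.0.0.1:8080, how depends on the backend), so it is reachable
# solely through the Apache reverse proxy on :80.

# Option 2 (assumes ufw is in use): drop outside traffic to 8080.
sudo ufw deny 8080/tcp
```

Either way, the <VirtualHost *:8080> block can then be removed entirely.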

Operation of certbot and nginx

Posted: 04 Jun 2022 01:21 PM PDT

I was hoping someone could satisfy my curiosity about how certbot and nginx interact during renewal time.

On my Linux host, I set up certbot and the certbot-nginx plugin. I had some regular nginx site definitions set up in /etc/nginx/conf.d, and when registering certs the certbot-nginx plugin added the TLS information to these individual nginx conf files for my various sites.

To renew, I have the following in a systemd service triggered once a day:

/bin/certbot renew --non-interactive --agree-tos --no-self-upgrade --post-hook "systemctl reload nginx"  

Looking at the systemd logs with journalctl, I can see the certs being updated when appropriate.

It works, but it's bugging me that I can't find anything in my config that tells nginx to let requests through on '.well-known/acme-challenge' for any of my domains. It's not in any of my nginx config files. I'm just wondering how certbot is able to offer up the token on that path. I've grepped chunks of the filesystem for 'well-known' and 'acme-challenge', but nothing turned up. The only 'root' directives in my site nginx config files point to their respective static sites. There's nothing but the default 'It Works' site in /usr/share/nginx/html.

Does certbot spin up a server of its own? I was dubious as I didn't think it would be able to bind to port 80 at the same time as nginx.
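For what it's worth, the nginx authenticator doesn't serve the challenge from disk at all: during each validation it temporarily rewrites the nginx configuration, reloads nginx, answers the challenge, and then restores the original config and reloads again - which is why no '.well-known' rule survives in the files. A sketch of the general shape of the transient block the plugin injects (not the literal generated text):

```nginx
# Transient block of roughly this shape, added and later rolled back:
location = /.well-known/acme-challenge/<token> {
    default_type text/plain;
    return 200 "<key-authorization>";
}
```

So certbot never needs to bind port 80 itself when the nginx authenticator is used; nginx keeps serving throughout.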

Suspending OneDrive syncing while a specific process is running

Posted: 04 Jun 2022 01:29 PM PDT

I'm sharing music files via my OneDrive folder. (This is by choice; I switch between my studio and the desk where I do the editing.)

When using my music editing software, there are very frequent changes made to many files, which causes two issues:

  • Unnecessary network traffic
  • Files being locked by OneDrive (even momentarily), which causes problems while editing

Is there a way to suspend OneDrive's syncing while another process is running?

(I could probably write a small service that handles that - polling, checking for my music software's process, and acting accordingly, but prefer to avoid doing so)
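A small watcher along these lines could do the polling described above: kill OneDrive while the editor is up and restart it afterwards. The editor's process name, the OneDrive path, and the /background flag are assumptions to adjust; this is a sketch, not a tested service.

```python
import os
import subprocess
import time

EDITOR_PROCESS = "musicapp.exe"  # hypothetical name of the music editor


def editor_running(tasklist_output: str, name: str = EDITOR_PROCESS) -> bool:
    """Return True if `name` appears in the output of Windows `tasklist`."""
    return name.lower() in tasklist_output.lower()


def watch(poll_seconds: int = 30) -> None:
    """Kill OneDrive while the editor runs; restart it once the editor exits."""
    onedrive = os.path.expandvars(
        r"%LOCALAPPDATA%\Microsoft\OneDrive\OneDrive.exe")
    was_running = False
    while True:
        out = subprocess.run(["tasklist"], capture_output=True,
                             text=True).stdout
        running = editor_running(out)
        if running and not was_running:
            # Editor just started: stop OneDrive's sync process.
            subprocess.run(["taskkill", "/IM", "OneDrive.exe", "/F"],
                           capture_output=True)
        elif was_running and not running:
            # Editor just exited: bring OneDrive back in the background.
            subprocess.Popen([onedrive, "/background"])
        was_running = running
        time.sleep(poll_seconds)
```

Killing OneDrive outright is blunt but avoids the file locks; pausing via its UI has no documented command-line equivalent.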

Nginx - Session persistency over two chained Nginx Instances

Posted: 04 Jun 2022 01:09 PM PDT

I have a very special use case - and I'm wondering if there isn't a better solution.

Use case: Several users should be able to reach one of about 1000 target instances (C1 to C1000) through an Nginx instance (A) at the same time. Unfortunately, the target instances have to sit behind separate Nginx reverse proxies (B1 to B10), 100 targets per proxy.

Problem: If several users jump in parallel from their clients to A, from there to one of the B Nginx instances, and from there to one of the underlying C target instances, how do we ensure the consistency of each individual session? In addition, the C target instances must be able to set session cookies on the client that reaches them via A and the respective B instance.

My solution: Each user who calls A gets a separate port. The connections between A and the individual B instances run via separate ports, e.g. from A to B1 10000, A to B2 11000, etc., A to B10 20000. The connections from B instances to the C target instances are maintained on dedicated ports as well, e.g. connection B1 to C0001 on port 10001, connection B1 to C0002 on port 10002, etc..

I hate this construct, but it works. The question is: what am I missing? I can't imagine there isn't a more elegant way.

Does anybody have a good idea how to solve this more elegantly? Maybe with session cookies?

Kind regards, D.
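A common alternative (a sketch, not tested against this topology) is to drop the per-user ports and make each hop route deterministically on a session cookie, using nginx's hash directive in the upstream block, so A always picks the same B, and B always picks the same C, for a given session:

```nginx
# On A: pick a B instance consistently per session cookie.
# The cookie name "sessionid" and internal hostnames are assumptions.
upstream b_tier {
    hash $cookie_sessionid consistent;
    server b1.internal:80;
    server b2.internal:80;
    # ... b3 through b10
}

server {
    listen 80;
    location / {
        proxy_pass http://b_tier;
        proxy_set_header Host $host;
        # Cookies pass through by default, so C instances can set
        # their own session cookies on the client unchanged.
    }
}
```

The same hash-on-cookie upstream pattern repeated on each B instance (over its 100 C targets) gives end-to-end stickiness without dedicating a port per user.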

Appcmd.exe: How to set config for a virtual directory by using its parent path?

Posted: 04 Jun 2022 12:43 PM PDT

Using Appcmd.exe, I am trying to change the configuration of a virtual directory using its parent path "IIS://localhost/w3svc/5/ROOT/Site123", where 5 is the parent site ID and Site123 is the name of the virtual directory. Below is the command I have tried:

C:\WINDOWS\system32\inetsrv\appcmd.exe set config "IIS://localhost/w3svc/5/ROOT/Site123" /section:system.webServer/rewrite/rules /"[name='ReverseProxyInboundRule'].enabled:false" /commit:site

But I am getting the error below.

ERROR ( message:Cannot find SITE object with identifier "IIS://localhost/w3svc/5/ROOT/Site123". )  

It would be helpful if anyone could show me how to set configuration for the virtual directory using its path "IIS://localhost/w3svc/5/ROOT/Site123", which is the only input I have. Thanks in advance.
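For reference, appcmd does not understand the ADSI-style IIS:// metabase paths used by older tools; it addresses configuration as "SiteName/virtual/path". A hedged sketch (the site name "Default Web Site" below is an assumption - resolve it from the numeric ID first):

```
:: Resolve the site name from its numeric ID (5).
%windir%\system32\inetsrv\appcmd.exe list site /id:5

:: Then target the rule with "SiteName/vdir" instead of IIS://...
%windir%\system32\inetsrv\appcmd.exe set config "Default Web Site/Site123" /section:system.webServer/rewrite/rules /"[name='ReverseProxyInboundRule'].enabled:false" /commit:site
```

So the IIS:// path has to be translated: w3svc/5 becomes the site's name, and ROOT/Site123 becomes /Site123 under it.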

Nginx: How to rewrite the path to the error page?

Posted: 04 Jun 2022 11:47 AM PDT

I have an /api location protected by HTTP basic authentication. I want my 401 page to be displayed on failed authorization.

Here is my file structure (if it's important, I'm running nginx on Windows):

nginx
|─ html
   | 401.html
   |
   |─ 40x_files
      | script.js
      | styles.css

Here is my config

server {
    listen 80;
    server_name example.com;

    error_page 401 /401.html;
    location = /401.html {
        auth_basic off;
        root html;
    }

    location /api {
        auth_basic "Restricted area";
        auth_basic_user_file ../conf/conf.d/htpasswd;
        proxy_pass http://127.0.0.1:8080/api;
    }
}

I have two problems:

  1. If I just type example.com/401.html in the browser, the page cannot load the files in the 40x_files folder (the asset files of the 401.html page).

  2. If I fail the authorization, the browser requests the assets from api/40x_files/script.js; how can I point it back at the correct path?

I suppose I need to do some kind of rewrite, but I don't really understand how to arrange it correctly
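A hedged sketch of one explanation: the 401 page is rendered for a URL under /api, so its *relative* links resolve to /api/40x_files/.... Using absolute paths (/40x_files/...) inside 401.html, plus an unauthenticated location for the asset folder, may be enough - no rewrite needed:

```nginx
# Serve the error-page assets from html/ regardless of the failing URL,
# and skip basic auth so the browser can fetch them while unauthorized.
location ^~ /40x_files/ {
    auth_basic off;
    root html;
}
```

With that in place, both typing /401.html directly and hitting the error page via /api should load script.js and styles.css.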

Can't bind a volume from one container into another (docker-in-docker)

Posted: 04 Jun 2022 11:48 AM PDT

I'm trying to set up my GitLab runner on my M1 MacBook for some environment testing. My CI workflow is quite simple: build -> test -> deploy. I use the docker executor, and for testing tasks I use docker-compose, because I have to deploy a database beside the service I want to test.

My CI works perfectly on an EC2 instance running Ubuntu. But when I run the runner on macOS, my volume ./backend is not bound to /app. When I ls /app in the backend container on EC2, all files and folders are there - but not when the runner runs on macOS.

gitlab-ci.yml (this is just the test_backend part):

test_backend:
  stage: test
  needs: ["build_backend_image"]
  image: docker
  services:
    - docker:dind
  before_script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - DOCKER_CONFIG=${DOCKER_CONFIG:-$HOME/.docker}
    - mkdir -p $DOCKER_CONFIG/cli-plugins
    - apk add curl
    - curl -SL https://github.com/docker/compose/releases/download/v2.3.3/docker-compose-linux-x86_64 -o $DOCKER_CONFIG/cli-plugins/docker-compose
    - chmod +x $DOCKER_CONFIG/cli-plugins/docker-compose
  script:
    - docker compose -f docker-compose-ci.yml up -d db
    - docker exec db sh mongo_init.sh
    - docker compose -f docker-compose-ci.yml up -d backend
    - pwd
    - ls
    - ls backend
    - docker exec backend pwd
    - docker exec backend ls -l
    - docker exec backend ls / -l
    - docker exec backend pip3 install --no-cache-dir --upgrade -r requirements-test.txt
    - docker exec backend pytest test --junitxml=report.xml -p no:cacheprovider
  interruptible: true
  artifacts:
    when: always
    reports:
      junit: backend/report.xml
  rules:
    - if: $CI_PIPELINE_SOURCE == 'merge_request_event'
    - if: $CI_COMMIT_BRANCH == 'develop'
    - if: $CI_COMMIT_BRANCH == 'master'

gitlab-runner/config.toml:

concurrent = 2
check_interval = 0

[session_server]
  session_timeout = 1800

[[runners]]
  name = "Macbook"
  url = "https://gitlab.com/"
  token = "xxxxxxxxxx"
  executor = "docker"
  [runners.custom_build_dir]
  [runners.cache]
    [runners.cache.s3]
    [runners.cache.gcs]
    [runners.cache.azure]
  [runners.docker]
    tls_verify = false
    image = "docker:20.10.13"
    privileged = true
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/certs/client", "/cache"]
    shm_size = 0

EC2 runner ls output:

$ docker exec backend ls /app -l
total 36
-rw-rw-rw-  1 root root  285 Jun  3 15:30 Dockerfile
-rw-rw-rw-  1 root root  325 May 20 23:49 DockerfileTest
-rw-rw-rw-  1 root root   48 May 20 23:49 __init__.py
drwxrwxrwx 11 root root 4096 Jun  4 17:17 app
-rw-rw-rw-  1 root root 4286 Jun  4 16:41 favicon.ico
-rw-rw-rw-  1 root root   33 Jun  4 16:41 requirements-test.txt
-rw-rw-rw-  1 root root 1673 Jun  4 16:41 requirements.txt
drwxrwxrwx  6 root root 4096 Jun  4 16:41 test

macOS runner ls output:

$ docker exec backend ls /app -l
total 0

docker-compose-ci.yml:

services:
  backend:
    container_name: backend
    image: $CI_REGISTRY_IMAGE/backend:$CI_COMMIT_SHA
    volumes:
      - ./backend:/app
    networks:
      default:
    ports:
      - 8000:8000
      - 587:587
      - 443:443
    depends_on:
      - db

  db:
    container_name: db
    image: mongo
    volumes:
      - ./mongo_init.sh:/mongo_init.sh:ro
    networks:
      default:
    environment:
      MONGO_INITDB_DATABASE: xxxxxxx
      MONGO_INITDB_ROOT_USERNAME: admin
      MONGO_INITDB_ROOT_PASSWORD: admin
    ports:
      - 27017:27017

networks:
  default:
    driver: bridge
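A hedged workaround, given that with docker:dind a bind mount like ./backend refers to the dind daemon's filesystem rather than the job's checkout (so an empty directory gets mounted): copy the sources into the running container instead of relying on the host mount. Paths are as in the posted compose file.

```shell
# After starting the service, copy the checkout into the container;
# docker cp works regardless of where the daemon's filesystem lives.
docker compose -f docker-compose-ci.yml up -d backend
docker cp ./backend/. backend:/app
```

Why it works on EC2 but not on macOS is presumably a difference in how the runner and daemon share the build directory; the copy approach sidesteps that entirely.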

How to permit only certain e-mail clients for IMAP access

Posted: 04 Jun 2022 11:06 AM PDT

My company is giving out new Android smartphones to employees, and they should be able to manage their e-mail on them. Currently, only access via webmail is enabled, but the mobile webmail client (Zimbra) is awkward and very feature-limited. Therefore, granting access to e-mail clients (mobile apps) seems to be a good move. However, client software would not be controlled by the company in this scenario, so I need to figure out a way to limit e-mail access to client apps installed by the company, on the issued smartphones. What is currently considered best practice for this (in an open-standards-based, non-MS environment)?

I found articles that suggest S/MIME certificates, but they seem to be about much more than just regulating client access (encryption etc.).

Would implementation of S/MIME for mobile/desktop clients require doing the same for webmail sessions (installing certificates in browsers...), or could a standard server be configured in such a way as to require certificate authentication only from mobile/desktop clients but not from browsers?
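As one possibility (a sketch assuming a Dovecot IMAP server, which the question doesn't confirm): TLS client certificates can be required on the IMAP listener only, while webmail keeps talking to the backend over its own path, so browsers are unaffected and no S/MIME signing/encryption is involved - just client-certificate authentication.

```
# dovecot.conf sketch: require a company-issued client certificate
# for IMAP logins; webmail sessions would not go through this listener.
ssl_ca = </etc/dovecot/company-ca.pem
ssl_verify_client_cert = yes
auth_ssl_require_client_cert = yes
```

The company would then pre-install a per-device client certificate on the issued smartphones; clients without one cannot even authenticate.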

Network Solutions DNS not always returning DKIM and SPF records

Posted: 04 Jun 2022 11:02 AM PDT

If there is a more appropriate place to ask this or it is a duplicate, please tell me.

I have a client who hosts their domains with Network Solutions. Some of their emails were bouncing due to stricter authorization requirements enforced on certain receiving mail servers, because they had no DKIM, SPF, or DMARC policy. So I set SPF and DKIM up for them; DMARC is pending a solution to the following:

Problem is, the records quite frequently fail to be retrieved. Specifically this is from ns89.worldnic.com and ns90.worldnic.com. I've used both dig and mxtoolbox.com to perform repeated tests, and it's a crapshoot.

I have set up many, many DKIM, SPF, and DMARC records, or provided needed tweaks and changes to them - probably hundreds of times by now. I am aware of the 48-hour rule: it has definitely been more than 48 hours since I set the SPF record, and not quite 24 for the DKIM.

In the many instances of setting up these kinds of records I have rarely had to wait more than 5 or 10 minutes to get solid record retrievals. Usually this is with other registrars. Not sure when the last time I set up SPF, DKIM, and DMARC on Network Solutions was.

The client is still getting some bounces specifically citing missing SPF records, which is exactly why I started repeatedly testing retrieval. Sure enough, I'd say retrieval is about 80 to 90% successful for those two nameservers. That means a minimum 10 to 20% bounce rate for this particular receiving MX, which has adopted a hard stance on the existence of SPF and/or DKIM authentication mechanisms.

Is there a problem with their nameservers, is there something else that could cause their nameservers to frequently not return records, or am I just being impatient? I've never had this problem before, but cannot find any information reporting outages with their nameservers. Are they under a DDOS attack? They seem to be replying, but the replies appear unstable.

My Google-fu is, unfortunately, only turning up advertisements about how great NetSol is, plus a bunch of completely unrelated networking material - mostly ads.

The SPF and DKIM records test good according to all the methods below - when they are successfully retrieved - so the problem is definitely not in the records themselves.

I.e. using the following patterns:

SPF txt record, ttl 15 mins, hostname @

v=spf1 include:emailsrvr.com ~all  

DKIM txt record, ttl 15 mins, hostname 1234-word._domainkey

v=DKIM1; k=rsa; p=[PUBLIC KEY]  

The SPF and DKIM records are provided by the mailserver host, a billion+ dollar company (not NetSol), and the DKIM private key is handled internally by them.


REPLICATION:

To replicate, set up new SPF and DKIM records on Network Solutions, then go to mxtoolbox.com and repeatedly test for those records.

If you have a domain with existing SPF and DKIM records that have been in use for a long time, repeatedly test on mxtoolbox.com for those records. What I really want to know is the retrieval success rate for existing SPF and DKIM records, but I have no way to test this.

If you have an actual mail server at your disposal, set up the corresponding DKIM private key on it.

Send emails to a Gmail account and use "Show Original" to view the SPF and DKIM auth-check status for that email; more likely, you will simply get a bounce within 5 minutes.

Send an email to check-auth@verifier.port25.com and you will instantly get an email back showing success or failure status for SPF, DKIM, and reverse DNS.

Basically, I'm trying to establish whether this is a temporary issue, or I need to recommend they move to another registrar. They are a big company, so high failure rates such as I've been experiencing with NetSol are not going to be acceptable.
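To put a number on the failure rate, a hedged loop like this (the domain is a placeholder) can query the authoritative server directly and tally how often an answer actually comes back:

```shell
# Query ns90.worldnic.com 50 times for the SPF TXT record; each
# iteration prints 1 (answer received) or 0 (empty answer), and
# uniq -c tallies the two outcomes.
for i in $(seq 1 50); do
    dig +short txt example.com @ns90.worldnic.com | grep -c 'v=spf1'
done | sort | uniq -c
```

Querying the authoritative server directly (with @) bypasses resolver caching, so the tally reflects the nameserver itself rather than any intermediate cache.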


TRANSCRIPTS:

Transcript examples of successful and failed SPF and DKIM lookups using mxtoolbox.com:

SUCCESSFUL SPF LOOKUP

- - - txt:[email-domain]
 1 k.gtld-servers.net xxx.52.178.30 NON-AUTH 23 ms Received 2 Referrals, rcode=NO_ERROR
   [email-domain]. 172800 IN NS ns89.worldnic.com,
   [email-domain]. 172800 IN NS ns90.worldnic.com,
 2 ns90.worldnic.com 162.159.27.117 AUTH 14 ms Received 1 Answers, rcode=NO_ERROR
   [email-domain]. 900 IN TXT v=spf1 include:[mx-domain] ~all,
Record returned is an RFC 4408 TXT record.
MAIL FROM:
RETURN-PATH:

- - Ranges - TXT:[mx-domain]
xxx.166.43.0/24
xxx.20.86.8
xxx.20.161.0/25
xxx.47.34.7
xxx.203.187.0/25
xxx.106.54.0/25
xxx.232.172.40

- - Subqueries
TXT:[mx-domain]

- - Results
TXT:[mx-domain] = SoftFail
TXT:[email-domain] = SoftFail
LookupServer 127ms

FAILED SPF LOOKUP

- - - txt:[email-domain]
 1 l.gtld-servers.net xxx.41.162.30 NON-AUTH 22 ms Received 2 Referrals, rcode=NO_ERROR
   [email-domain]. 172800 IN NS ns89.worldnic.com,
   [email-domain]. 172800 IN NS ns90.worldnic.com,
 2 ns90.worldnic.com xxx.159.27.117 AUTH 91 ms Received 1 Referrals, rcode=NO_ERROR
   [email-domain]. 3600 IN SOA mname=NS89.WORLDNIC.com rname=namehost.WORLDNIC.com serial=115072811,
LookupServer 147ms

SUCCESSFUL DKIM LOOKUP

- - - dkim:20220603-ywjypl4z._domainkey.[email-domain]
 1 h.gtld-servers.net xxx.54.112.30 NON-AUTH 24 ms Received 2 Referrals, rcode=NO_ERROR
   [email-domain]. 172800 IN NS ns89.worldnic.com,
   [email-domain]. 172800 IN NS ns90.worldnic.com,
 2 ns90.worldnic.com xxx.159.27.117 AUTH 2 ms Received 1 Answers, rcode=NO_ERROR
   20220603-[random-letters]._domainkey.[email-domain]. 900 IN TXT v=DKIM1; k=rsa; p=MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQC0bwKMEJDmRSXMb99XZkRgh3qo6rxyCkdz09Y73HArqQWYdDuD9a8L8iGuCU4PmlMNJJCKU3OJuWr0O4YxNb32NaAko1+5uzhLTye/b8HZp6WASNQFGqPdpCQPMusCDx/FZVvNmf+TNUXF7XQ+7S9dYd5OoX2nwOLuxH6z5IUP0wIDAQAB,
Record returned is an RFC 6376 TXT record.
LookupServer 26ms

FAILED DKIM LOOKUP

- - - dkim:20220603-ywjypl4z._domainkey.[email-domain]
 1 b.gtld-servers.net XXX.33.14.30 NON-AUTH 0 ms Received 2 Referrals, rcode=NO_ERROR
   [email-domain]. 172800 IN NS ns89.worldnic.com,
   [email-domain]. 172800 IN NS ns90.worldnic.com,
 2 ns90.worldnic.com XXX.159.27.117 AUTH 86 ms Received 1 Referrals, rcode=NO_ERROR
   [email-domain]. 3600 IN SOA mname=NS89.WORLDNIC.com rname=namehost.WORLDNIC.com serial=115072811,
LookupServer 86ms

SAMPLE DIG OUTPUT:

user@onyx ~ $ dig [email-domain] txt 20220603-asdf._domainkey.[email-domain]

; <<>> DiG 9.10.3-P4-Ubuntu <<>> [email-domain] txt 20220603-asdf._domainkey.[email-domain]
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 893
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;[email-domain].        IN  TXT

;; ANSWER SECTION:
[email-domain]. 729 IN  TXT "v=spf1 include:emailsrvr.com ~all"

;; Query time: 108 msec
;; SERVER: 127.0.1.1#53(127.0.1.1)
;; WHEN: Sat Jun 04 10:55:36 PDT 2022
;; MSG SIZE  rcvd: 90

;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 24138
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;20220603-asdf._domainkey.[email-domain]. IN A

;; Query time: 1 msec
;; SERVER: 127.0.1.1#53(127.0.1.1)
;; WHEN: Sat Jun 04 10:55:36 PDT 2022
;; MSG SIZE  rcvd: 73


user@onyx ~ $ dig [email-domain] txt 20220603-asdf._domainkey.[email-domain]

; <<>> DiG 9.10.3-P4-Ubuntu <<>> [email-domain] txt 20220603-asdf._domainkey.[email-domain]
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 13290
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;[email-domain].        IN  TXT

;; AUTHORITY SECTION:
[email-domain]. 3409    IN  SOA NS89.WORLDNIC.com. namehost.WORLDNIC.com. 122060319 10800 3600 604800 3600

;; Query time: 17 msec
;; SERVER: 127.0.1.1#53(127.0.1.1)
;; WHEN: Sat Jun 04 10:55:38 PDT 2022
;; MSG SIZE  rcvd: 103

;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 40465
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;20220603-asdf._domainkey.[email-domain]. IN A

;; Query time: 1 msec
;; SERVER: 127.0.1.1#53(127.0.1.1)
;; WHEN: Sat Jun 04 10:55:38 PDT 2022
;; MSG SIZE  rcvd: 73

user@onyx ~ $

Set the timeout for SSH connection establishment

Posted: 04 Jun 2022 12:08 PM PDT

How do I set the timeout for SSH connection establishment on Linux? When an external connection comes in (e.g. ssh admin@192.168.0.100) and sits at the password prompt, I want it dropped after a minute. Does this require changing the SSH configuration on the client side or the server side?
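Dropping unauthenticated sessions (the password-prompt stage) is a server-side setting; the client-side ConnectTimeout option only covers TCP connection establishment. A sketch of both:

```
# /etc/ssh/sshd_config (server side): disconnect clients that have
# not finished authenticating within 60 seconds.
LoginGraceTime 60

# ~/.ssh/config (client side): give up on establishing the TCP
# connection itself after 10 seconds.
Host *
    ConnectTimeout 10
```

After changing sshd_config, reload the sshd service for the new grace time to take effect.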

How to get the instance id of an AWS EC2 instance from its IP address?

Posted: 04 Jun 2022 11:32 AM PDT

With the AWS CLI command below, I can get the public IP of an EC2 instance if the instance id is known.

aws ec2 describe-instances --instance-ids i-0d577af80725c9a91 --query 'Reservations[*].Instances[*].PublicIpAddress' --output text --profile testsubaccount --region us-east-1  

184.72.83.182

But how do I do the reverse? When I know the public IP address, how do I get the instance id of the EC2 instance?

aws ec2 describe-instances --instance-ids i-038317982dc6a7c64 --query 'Reservations[*].Instances[*].PublicIpAddress' --output text --profile uday-subaccount --region us-east-1  

54.174.85.61

aws ec2 describe-network-interfaces --filters Name=addresses.private-ip-address,Values=54.174.85.61 --profile uday-subaccount --region us-east-1  

{ "NetworkInterfaces": [] }
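The describe-network-interfaces call above comes back empty because addresses.private-ip-address matches only private IPs. For a public IP there are dedicated filters; a sketch of both directions, reusing the profile and region from the question:

```shell
# Filter instances directly on the public IP.
aws ec2 describe-instances \
    --filters Name=ip-address,Values=54.174.85.61 \
    --query 'Reservations[*].Instances[*].InstanceId' \
    --output text --profile uday-subaccount --region us-east-1

# Or go via the network interface association.
aws ec2 describe-network-interfaces \
    --filters Name=association.public-ip,Values=54.174.85.61 \
    --query 'NetworkInterfaces[*].Attachment.InstanceId' \
    --output text --profile uday-subaccount --region us-east-1
```

Either command should print the instance id (e.g. i-038317982dc6a7c64) as long as the IP is currently associated with a running instance in that region.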

systemd-launched service exists in /var/lib/systemd but not visible in list-units, list-unit-files or status

Posted: 04 Jun 2022 10:48 AM PDT

I'm seeing the following in /var/log/auth.log

Jun  1 06:32:49 [redacted] su: (to nobody) root on none
Jun  1 06:32:49 [redacted] su: pam_unix(su:session): session opened for user nobody(uid=65534) by (uid=0)
Jun  1 06:32:49 [redacted] systemd: pam_unix(systemd-user:session): session opened for user nobody(uid=65534) by (uid=0)
Jun  1 06:32:49 [redacted] su: pam_unix(su:session): session closed for user nobody

Matching entries in /var/log/daemon.log

Jun  1 06:32:49 [redacted] systemd[1]: Created slice User Slice of UID 65534.
Jun  1 06:32:49 [redacted] systemd[1]: Starting User Runtime Directory /run/user/65534...
Jun  1 06:32:49 [redacted] systemd[1]: Finished User Runtime Directory /run/user/65534.
Jun  1 06:32:49 [redacted] systemd[1]: Starting User Manager for UID 65534...
Jun  1 06:32:49 [redacted] systemd[39580]: Queued start job for default target Main User Target.
Jun  1 06:32:49 [redacted] systemd[39580]: Created slice User Application Slice.
Jun  1 06:32:49 [redacted] systemd[39580]: Reached target Paths.
Jun  1 06:32:49 [redacted] systemd[39580]: Reached target Timers.
Jun  1 06:32:49 [redacted] systemd[39580]: Listening on GnuPG network certificate management daemon.
Jun  1 06:32:49 [redacted] systemd[39580]: Listening on GnuPG cryptographic agent and passphrase cache (access for web browsers).
Jun  1 06:32:49 [redacted] systemd[39580]: Listening on GnuPG cryptographic agent and passphrase cache (restricted).
Jun  1 06:32:49 [redacted] systemd[39580]: Listening on GnuPG cryptographic agent (ssh-agent emulation).
Jun  1 06:32:49 [redacted] systemd[39580]: Listening on GnuPG cryptographic agent and passphrase cache.
Jun  1 06:32:49 [redacted] systemd[39580]: Reached target Sockets.
Jun  1 06:32:49 [redacted] systemd[39580]: Reached target Basic System.

Looking in /usr/lib/systemd ...

$ sudo find /usr/lib/systemd -name gpg\* -o -name gnupg\*
/usr/lib/systemd/user/gpg-agent-ssh.socket
/usr/lib/systemd/user/gpg-agent.service
/usr/lib/systemd/user/gpg-agent-browser.socket
/usr/lib/systemd/user/gpg-agent-extra.socket
/usr/lib/systemd/user/gpg-agent.socket

However, there's no gpg-agent listed in list-units or list-unit-files, and the status command returns an error:

$ sudo systemctl status gpg-agent
Unit gpg-agent.service could not be found.

I also tried --user:

$ sudo systemctl status --user
Failed to connect to bus: $DBUS_SESSION_BUS_ADDRESS and $XDG_RUNTIME_DIR not defined (consider using --machine=<user>@.host --user to connect to bus of other user)

Clearly I'm missing something here...

Where in systemd is the configuration file that is executing su to user nobody and launching gpg-agent?
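For context: the files found under /usr/lib/systemd/user are *user* units, managed by each user's own systemd instance rather than by PID 1, which is why the system-level list-units, list-unit-files, and status cannot see them. A hedged sketch of how to inspect the user manager that the logs show being started for nobody:

```shell
# Ask the 'nobody' user's manager (not the system manager) about its
# units, using the --machine form the error message itself suggests.
sudo systemctl --user --machine=nobody@.host list-units 'gpg-agent*'
```

The su-to-nobody activity itself is often not a systemd unit at all but a periodic job running as nobody (e.g. something under /etc/cron.daily), with systemd merely spinning up the user session around it - worth checking the cron directories as well.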

Access mysql from another computer connected with crossover cable

Posted: 04 Jun 2022 12:39 PM PDT

I have two computers, and I am trying to access MySQL over the network on port 3322 (MySQL listens on this port).

Computer A (server):

  • Windows 10 Home
  • Firewall exception added for port 3322 (MySQL installed and listening on this port)
  • Static IP address: 192.168.0.1
  • Subnet mask: 255.255.255.0
  • Connected via onboard Ethernet port

Computer B (client):

  • Windows 10 Home
  • Static IP address: 192.168.0.3
  • Subnet mask: 255.255.255.0
  • Connected via USB-to-Ethernet adapter (no onboard Ethernet)

Both are in the same workgroup (Workgroup), and the two computers are connected via a crossover cable (tested OK with a cable tester).

First

"B" pinging "A" or vice versa results in "request timed out"

"A" can ping itself & 127.0.0.1

"B" can ping itself & 127.0.0.1

Second

MySQL cannot connect, since the two computers don't seem to see each other.
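One hedged first step: Windows 10 blocks inbound ICMP echo requests by default, so "request timed out" on ping does not necessarily mean the link is down. Allowing echo on both machines makes ping a usable diagnostic again:

```
:: Run in an elevated prompt on both computers: allow inbound ICMPv4 echo.
netsh advfirewall firewall add rule name="Allow ICMPv4 echo" protocol=icmpv4:8,any dir=in action=allow
```

If ping then works but MySQL still fails, the remaining suspects are the port-3322 firewall rule and MySQL's bind-address setting.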

Designate installation on OpenStack failed

Posted: 04 Jun 2022 11:46 AM PDT

While trying to install Designate on OpenStack Victoria on Ubuntu 20.04 using this guide: https://docs.openstack.org/designate/victoria/install/install-ubuntu.html

I'm getting this error:

Updating Pools Configuration
****************************
2021-06-04 18:35:56.634 2275260 CRITICAL designate [designate-manage - - - - -] Unhandled error: oslo_messaging.rpc.client.RemoteError: Remote error: CantStartEngineError No sql_connection parameter is established
['Traceback (most recent call last):\n', '  File "/usr/lib/python3/dist-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming\n    res = self.dispatcher.dispatch(message)\n', '  File "/usr/lib/python3/dist-packages/oslo_messaging/rpc/dispatcher.py", line 309, in dispatch\n    return self._do_dispatch(endpoint, method, ctxt, args)\n', '  File "/usr/lib/python3/dist-packages/oslo_messaging/rpc/dispatcher.py", line 229, in _do_dispatch\n    result = func(ctxt, **new_args)\n', '  File "/usr/lib/python3/dist-packages/designate/rpc.py", line 238, in exception_wrapper\n    return f(self, *args, **kwargs)\n', '  File "/usr/lib/python3/dist-packages/designate/central/service.py", line 2277, in find_pool\n    return self.storage.find_pool(context, criterion)\n', '  File "/usr/lib/python3/dist-packages/designate/central/service.py", line 227, in storage\n    self._storage = storage.get_storage(storage_driver)\n', '  File "/usr/lib/python3/dist-packages/designate/storage/__init__.py", line 36, in get_storage\n    return cls()\n', '  File "/usr/lib/python3/dist-packages/designate/storage/impl_sqlalchemy/__init__.py", line 40, in __init__\n    super(SQLAlchemyStorage, self).__init__()\n', '  File "/usr/lib/python3/dist-packages/designate/sqlalchemy/base.py", line 72, in __init__\n    self.engine = session.get_engine(self.get_name())\n', '  File "/usr/lib/python3/dist-packages/designate/sqlalchemy/session.py", line 47, in get_engine\n    facade = _create_facade_lazily(cfg_group)\n', '  File "/usr/lib/python3/dist-packages/designate/sqlalchemy/session.py", line 38, in _create_facade_lazily\n    _FACADES[cache_name] = session.EngineFacade(\n', '  File "/usr/lib/python3/dist-packages/oslo_db/sqlalchemy/enginefacade.py", line 1293, in __init__\n    self._factory._start(\n', '  File "/usr/lib/python3/dist-packages/oslo_db/sqlalchemy/enginefacade.py", line 508, in _start\n    self._setup_for_connection(\n', '  File "/usr/lib/python3/dist-packages/oslo_db/sqlalchemy/enginefacade.py", line 531, in _setup_for_connection\n    raise exception.CantStartEngineError(\n', 'oslo_db.exception.CantStartEngineError: No sql_connection parameter is established\n'].
2021-06-04 18:35:56.634 2275260 ERROR designate Traceback (most recent call last):
2021-06-04 18:35:56.634 2275260 ERROR designate   File "/usr/bin/designate-manage", line 10, in <module>
2021-06-04 18:35:56.634 2275260 ERROR designate     sys.exit(main())
2021-06-04 18:35:56.634 2275260 ERROR designate   File "/usr/lib/python3/dist-packages/designate/cmd/manage.py", line 123, in main
2021-06-04 18:35:56.634 2275260 ERROR designate     fn(*fn_args)
2021-06-04 18:35:56.634 2275260 ERROR designate   File "/usr/lib/python3/dist-packages/designate/manage/pool.py", line 145, in update
2021-06-04 18:35:56.634 2275260 ERROR designate     pool = self.central_api.find_pool(
2021-06-04 18:35:56.634 2275260 ERROR designate   File "/usr/lib/python3/dist-packages/designate/central/rpcapi.py", line 343, in find_pool
2021-06-04 18:35:56.634 2275260 ERROR designate     return self.client.call(context, 'find_pool', criterion=criterion)
2021-06-04 18:35:56.634 2275260 ERROR designate   File "/usr/lib/python3/dist-packages/oslo_messaging/rpc/client.py", line 509, in call
2021-06-04 18:35:56.634 2275260 ERROR designate     return self.prepare().call(ctxt, method, **kwargs)
2021-06-04 18:35:56.634 2275260 ERROR designate   File "/usr/lib/python3/dist-packages/oslo_messaging/rpc/client.py", line 175, in call
2021-06-04 18:35:56.634 2275260 ERROR designate     self.transport._send(self.target, msg_ctxt, msg,
2021-06-04 18:35:56.634 2275260 ERROR designate   File "/usr/lib/python3/dist-packages/oslo_messaging/transport.py", line 123, in _send
2021-06-04 18:35:56.634 2275260 ERROR designate     return self._driver.send(target, ctxt, message,
2021-06-04 18:35:56.634 2275260 ERROR designate   File "/usr/lib/python3/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 652, in send
2021-06-04 18:35:56.634 2275260 ERROR designate     return self._send(target, ctxt, message, wait_for_reply, timeout,
2021-06-04 18:35:56.634 2275260 ERROR designate   File "/usr/lib/python3/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 644, in _send
2021-06-04 18:35:56.634 2275260 ERROR designate     raise result
2021-06-04 18:35:56.634 2275260 ERROR designate oslo_messaging.rpc.client.RemoteError: Remote error: CantStartEngineError No sql_connection parameter is established
2021-06-04 18:35:56.634 2275260 ERROR designate ['Traceback (most recent call last):\n', '  File "/usr/lib/python3/dist-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming\n    res = self.dispatcher.dispatch(message)\n', '  File "/usr/lib/python3/dist-packages/oslo_messaging/rpc/dispatcher.py", line 309, in dispatch\n    return self._do_dispatch(endpoint, method, ctxt, args)\n', '  File "/usr/lib/python3/dist-packages/oslo_messaging/rpc/dispatcher.py", line 229, in _do_dispatch\n    result = func(ctxt, **new_args)\n', '  File "/usr/lib/python3/dist-packages/designate/rpc.py", line 238, in exception_wrapper\n    return f(self, *args, **kwargs)\n', '  File "/usr/lib/python3/dist-packages/designate/central/service.py", line 2277, in find_pool\n    return self.storage.find_pool(context, criterion)\n', '  File "/usr/lib/python3/dist-packages/designate/central/service.py", line 227, in storage\n    self._storage = storage.get_storage(storage_driver)\n', '  File "/usr/lib/python3/dist-packages/designate/storage/__init__.py", line 36, in get_storage\n    return cls()\n', '  File "/usr/lib/python3/dist-packages/designate/storage/impl_sqlalchemy/__init__.py", line 40, in __init__\n    super(SQLAlchemyStorage, self).__init__()\n', '  File "/usr/lib/python3/dist-packages/designate/sqlalchemy/base.py", line 72, in __init__\n    self.engine = session.get_engine(self.get_name())\n', '  File "/usr/lib/python3/dist-packages/designate/sqlalchemy/session.py", line 47, in get_engine\n    facade = _create_facade_lazily(cfg_group)\n', '  File "/usr/lib/python3/dist-packages/designate/sqlalchemy/session.py", line 38, in _create_facade_lazily\n    _FACADES[cache_name] = session.EngineFacade(\n', '  File "/usr/lib/python3/dist-packages/oslo_db/sqlalchemy/enginefacade.py", line 1293, in __init__\n    self._factory._start(\n', '  File "/usr/lib/python3/dist-packages/oslo_db/sqlalchemy/enginefacade.py", line 508, in _start\n    self._setup_for_connection(\n', '  File "/usr/lib/python3/dist-packages/oslo_db/sqlalchemy/enginefacade.py", line 531, in _setup_for_connection\n    raise exception.CantStartEngineError(\n', 'oslo_db.exception.CantStartEngineError: No sql_connection parameter is established\n'].

Here are the logs for designate

root@openstack:/etc/designate# tail /var/log/designate/designate-
designate-agent.log    designate-api.log      designate-central.log
root@openstack:/etc/designate# tail /var/log/designate/designate-agent.log
2021-06-03 16:08:51.127 2080278 INFO designate.agent.handler [-] Agent masters: []
2021-06-03 16:08:51.130 2080278 INFO designate.metrics [-] Statsd disabled
2021-06-03 16:08:51.141 2080278 INFO designate.service [-] Starting agent service (version: 11.0.0)
2021-06-03 16:08:51.141 2080278 INFO designate.utils [-] Opening TCP Listening Socket on 0.0.0.0:5358
2021-06-03 16:08:51.141 2080278 INFO designate.utils [-] Opening UDP Listening Socket on 0.0.0.0:5358
2021-06-03 16:08:51.142 2080278 INFO designate.backend.agent_backend.impl_bind9 [-] Started bind9 backend
2021-06-03 16:08:51.164 2080278 INFO designate.service [req-82426d68-7768-46cf-a4e2-6858015e98b9 - - - - -] _handle_tcp thread started
2021-06-03 16:08:51.165 2080278 INFO designate.service [req-82426d68-7768-46cf-a4e2-6858015e98b9 - - - - -] _handle_udp thread started

root@openstack:/etc/designate# tail /var/log/designate/designate-api.log
2021-06-04 16:52:01.261 2080402 INFO eventlet.wsgi.server [req-459cfdd5-8791-4031-a056-f5a9ecfe2dea - - - - -] 127.0.0.1 "GET /v2/zones HTTP/1.1" status: 503  len: 434 time: 0.0057116
2021-06-04 17:31:41.646 2080402 INFO eventlet.wsgi.server [req-45c72086-5a93-453e-83e6-8c9a889efe30 - - - - -] 127.0.0.1 "GET / HTTP/1.1" status: 200  len: 312 time: 0.0010707
2021-06-04 17:31:41.650 2080402 CRITICAL keystonemiddleware.auth_token [req-45c72086-5a93-453e-83e6-8c9a889efe30 - - - - -] Unable to validate token: Unable to establish connection to https://127.0.0.1:35357: HTTPSConnectionPool(host='127.0.0.1', port=35357): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x7feb047d02e0>: Failed to establish a new connection: [Errno 111] ECONNREFUSED')): keystoneauth1.exceptions.connection.ConnectFailure: Unable to establish connection to https://127.0.0.1:35357: HTTPSConnectionPool(host='127.0.0.1', port=35357): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x7feb047d02e0>: Failed to establish a new connection: [Errno 111] ECONNREFUSED'))
2021-06-04 17:31:41.651 2080402 INFO eventlet.wsgi.server [req-45c72086-5a93-453e-83e6-8c9a889efe30 - - - - -] 127.0.0.1 "GET /v2/zones HTTP/1.1" status: 503  len: 434 time: 0.0027246
2021-06-04 17:32:18.532 2080402 INFO eventlet.wsgi.server [req-aadb869e-8c80-470a-9814-54b74823f060 - - - - -] 127.0.0.1 "GET / HTTP/1.1" status: 200  len: 312 time: 0.0015867
2021-06-04 17:32:18.538 2080402 CRITICAL keystonemiddleware.auth_token [req-aadb869e-8c80-470a-9814-54b74823f060 - - - - -] Unable to validate token: Unable to establish connection to https://127.0.0.1:35357: HTTPSConnectionPool(host='127.0.0.1', port=35357): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x7feb0492b100>: Failed to establish a new connection: [Errno 111] ECONNREFUSED')): keystoneauth1.exceptions.connection.ConnectFailure: Unable to establish connection to https://127.0.0.1:35357: HTTPSConnectionPool(host='127.0.0.1', port=35357): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x7feb0492b100>: Failed to establish a new connection: [Errno 111] ECONNREFUSED'))
2021-06-04 17:32:18.540 2080402 INFO eventlet.wsgi.server [req-aadb869e-8c80-470a-9814-54b74823f060 - - - - -] 127.0.0.1 "GET /v2/zones HTTP/1.1" status: 503  len: 434 time: 0.0040829
2021-06-04 18:11:38.608 2080402 INFO eventlet.wsgi.server [req-42aeb27c-e929-4440-aa4a-dba68ed89c66 - - - - -] 127.0.0.1 "GET / HTTP/1.1" status: 200  len: 312 time: 0.0028553
2021-06-04 18:11:38.616 2080402 CRITICAL keystonemiddleware.auth_token [req-42aeb27c-e929-4440-aa4a-dba68ed89c66 - - - - -] Unable to validate token: Unable to establish connection to https://127.0.0.1:35357: HTTPSConnectionPool(host='127.0.0.1', port=35357): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x7feb047d0760>: Failed to establish a new connection: [Errno 111] ECONNREFUSED')): keystoneauth1.exceptions.connection.ConnectFailure: Unable to establish connection to https://127.0.0.1:35357: HTTPSConnectionPool(host='127.0.0.1', port=35357): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x7feb047d0760>: Failed to establish a new connection: [Errno 111] ECONNREFUSED'))
2021-06-04 18:11:38.618 2080402 INFO eventlet.wsgi.server [req-42aeb27c-e929-4440-aa4a-dba68ed89c66 - - - - -] 127.0.0.1 "GET /v2/zones HTTP/1.1" status: 503  len: 434 time: 0.0056593

root@openstack:/etc/designate# tail /var/log/designate/designate-central.log
2021-06-04 18:40:35.151 2080147 ERROR oslo.service.loopingcall   File "/usr/lib/python3/dist-packages/designate/sqlalchemy/session.py", line 38, in _create_facade_lazily
2021-06-04 18:40:35.151 2080147 ERROR oslo.service.loopingcall     _FACADES[cache_name] = session.EngineFacade(
2021-06-04 18:40:35.151 2080147 ERROR oslo.service.loopingcall   File "/usr/lib/python3/dist-packages/oslo_db/sqlalchemy/enginefacade.py", line 1293, in __init__
2021-06-04 18:40:35.151 2080147 ERROR oslo.service.loopingcall     self._factory._start(
2021-06-04 18:40:35.151 2080147 ERROR oslo.service.loopingcall   File "/usr/lib/python3/dist-packages/oslo_db/sqlalchemy/enginefacade.py", line 508, in _start
2021-06-04 18:40:35.151 2080147 ERROR oslo.service.loopingcall     self._setup_for_connection(
2021-06-04 18:40:35.151 2080147 ERROR oslo.service.loopingcall   File "/usr/lib/python3/dist-packages/oslo_db/sqlalchemy/enginefacade.py", line 531, in _setup_for_connection
2021-06-04 18:40:35.151 2080147 ERROR oslo.service.loopingcall     raise exception.CantStartEngineError(
2021-06-04 18:40:35.151 2080147 ERROR oslo.service.loopingcall oslo_db.exception.CantStartEngineError: No sql_connection parameter is established

Any help or suggestions would be great.

Network File Share access across multiple domains under the same forest

Posted: 04 Jun 2022 11:01 AM PDT

My knowledge of Active Directory and network setup is limited. In the environment of this example, there are multiple domains under one forest. It has come up that users of Domain A have access to network file shares on Domain B, Domain C, etc.

What causes this cross domain access? I initially thought it was due to the 'Everyone' AD group being assigned access rights to various network shares, but there are instances of access where the 'Everyone' group is not present.

Ultimately, how would you go about securing file access so that Domain A does not have access to the network file shares of another domain in the forest? Any insight into what is causing this behavior, or how to better secure access so that cross-domain file access is not possible, would be highly appreciated.

Thanks.

can't uninstall exchange 2016 management tools on windows 10

Posted: 04 Jun 2022 01:00 PM PDT

I tried to uninstall the Exchange 2016 management tools from Windows 10, but it failed and was forced to exit.

Exchange 2016 management tools

Then I ran the Exchange 2016 setup and got the following

Unexpected 404 error on all routes laravel application all of a sudden - NGINX|PHP-FPM

Posted: 04 Jun 2022 12:00 PM PDT

I have the following nginx config file

##
# You should look at the following URL's in order to grasp a solid understanding
# of Nginx configuration files in order to fully unleash the power of Nginx.
# http://wiki.nginx.org/Pitfalls
# http://wiki.nginx.org/QuickStart
# http://wiki.nginx.org/Configuration
#
# Generally, you will want to move this file somewhere, and start with a clean
# file but keep this around for reference. Or just disable in sites-enabled.
#
# Please see /usr/share/doc/nginx-doc/examples/ for more detailed examples.
##

# Default server configuration
#

server {
    root /var/www/open_final/current;

    index index.html index.htm;

    # Make site accessible from http://localhost/
    server_name app.mypersonaldomain.co;

    if ($http_x_forwarded_proto != "https") {
        rewrite ^(.*)$ https://$server_name$REQUEST_URI permanent;
    }

#   if ($http_user_agent ~* '(iPhone|iPod|android|blackberry)') {
#       return 301 https://mobile.mypersonaldomain.co;
#   }

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
#       try_files $uri $uri/ =404;
        try_files $uri $uri/ /index.html;
        # Uncomment to enable naxsi on this location
        # include /etc/nginx/naxsi.rules
    }
}

server {
    root /var/www/open-backend-v2/current/public;

    index index.php index.html index.htm;

    server_name localhost v2-api.mypersonaldomain.co;

    location / {
        try_files $uri $uri/ /index.php$is_args$args;
    }

    error_page 404 /404.html;
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/var/run/php/php7.1-fpm.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}

I have two applications running on this nginx server: a Laravel (PHP) backend and an Angular front-end. Last week, all the backend (PHP) routes started throwing 404 Not Found errors. I restarted nginx, but the errors persisted. Finally I restarted my AWS instance and it started working fine. Then yesterday, all of a sudden, the URLs started throwing 404 again and I had to restart the instance.

The front-end application was loading, but the backend (Laravel/PHP) URLs were throwing 404.

I suspect a hacker may be doing it. This had not happened in the past two years; it only started last week.

What could be the reason for it? Could someone be tampering with the .htaccess file, or is it something to do with the nginx config? But if so, why are only the Laravel application routes showing 404?

Need help on this. Has anyone faced this issue?
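One thing worth ruling out first: with `try_files $uri =404` in the PHP location, a 404 on every Laravel route often means `$document_root/index.php` is not where nginx thinks it is — for example, when the `current` deploy symlink goes stale. A minimal triage sketch (the `/tmp/demo-root` path is a stand-in for illustration; the real root in the config above is `/var/www/open-backend-v2/current/public`):

```shell
# Check whether the Laravel front controller is actually present under the
# document root nginx resolves. A missing index.php makes every PHP route 404.
ROOT=/tmp/demo-root    # stand-in; use the real nginx root when diagnosing
mkdir -p "$ROOT" && touch "$ROOT/index.php"

if [ -f "$ROOT/index.php" ]; then
    echo "front controller found"
else
    echo "front controller missing -> nginx returns 404 for all PHP routes"
fi
```

If the file is present, the next suspects are the php-fpm socket named in `fastcgi_pass` and the nginx error log, but a vanished `index.php` matches the "everything 404s until a restart" pattern surprisingly often.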

Why does nginx require default ssl server to have a certificate?

Posted: 04 Jun 2022 02:07 PM PDT

I have a testing site with 2 domains and I want to enable SSL for both.

Also, I want a default server that displays an error page if a client accesses a name outside those 2 domains.

This is my nginx.conf:

server {
    listen 443 ssl;
    server_name a.com;
    ssl_certificate a.cert;
    ssl_certificate_key a.key;
}
server {
    listen 443 ssl;
    server_name b.com;
    ssl_certificate b.cert;
    ssl_certificate_key b.key;
}
server {
    listen 443 default_server ssl;
    ...
}

Problem: The default server must have a valid cert/key pair, even if the client is connecting to a.com or b.com.

If I don't specify a cert/key pair on the default ssl server, nginx gives me this error:

no "ssl_certificate" is defined in server listening on SSL port while SSL handshaking

I don't understand the logic here. With SNI enabled, when a client clearly says it's accessing a.com, which has a standalone server block in nginx.conf, why does nginx still require the default server to have a cert/key pair? Why does it even bother with the default server?
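For what it's worth, nginx validates at startup that every `ssl` server can complete a handshake, because not all clients send SNI and the default server must be able to answer those. Since nginx 1.19.4 there is a directive for exactly this case; a sketch of a certificate-less catch-all (assuming a recent enough nginx):

```nginx
# Catch-all for names other than a.com / b.com: refuse the TLS handshake
# outright instead of presenting a certificate (requires nginx >= 1.19.4).
server {
    listen 443 ssl default_server;
    ssl_reject_handshake on;
}
```

On older nginx versions the usual workaround is a self-signed dummy cert/key pair on the default server, since a browser error page can only be served after a completed handshake anyway.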

Server 2016 - Backup 517 error

Posted: 04 Jun 2022 10:04 AM PDT

For the last week one server is failing backups to a NAS with

The backup operation that started at '‎2017‎-‎12‎-‎17T23:37:47.558318200Z' has failed with following error code '0x807800C5' (There was a failure in preparing the backup image of one of the volumes in the backup set.). Please review the event details for a solution, and then rerun the backup operation once the issue is resolved.  

It creates the shadow copy, backs up the EFI partition, moves on to backing up the C: drive and then fails

Backup_Image

Other servers, plus a whole raft of Sybase databases backup to the same NAS without any problems

I changed WindowsImageBackup to a different name. It created the directory but still fails, getting only as far as

Drive_Image_One

This Saturday (after patch Tuesday) it was rebooted, but still the same. At a loss with this one!

Thank you

how to block incoming DHCP DISCOVERY messages

Posted: 04 Jun 2022 02:00 PM PDT

I'm running the ISC DHCP daemon on CentOS and want to block unwanted DISCOVER messages (by client MAC address) before they reach dhcpd.

How can I do this with iptables or anything else?
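One caveat worth knowing: ISC dhcpd receives broadcasts via packet sockets, so iptables INPUT rules frequently do not stop it from seeing DISCOVERs at all. A hedged alternative is to filter at the daemon level in dhcpd.conf (the host name and MAC below are illustrative):

```
# dhcpd.conf sketch: make dhcpd silently ignore a specific client.
host blocked-client {                        # name is illustrative
    hardware ethernet 00:11:22:33:44:55;     # example MAC to block
    ignore booting;
}
```

If the packets really must be dropped before they reach the box, ebtables (layer-2 filtering on the bridge/interface) is the tool that can match on source MAC reliably for broadcast DHCP traffic.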

NS record configuration not working in bind (bind9) on ubuntu

Posted: 04 Jun 2022 03:00 PM PDT

I have following zone file:

testzone.local. IN SOA MyUbuntu hostname (
                          6        ; Serial
                          3h       ; Refresh after 3 hours
                          1h       ; Retry after 1 hour
                          1w       ; Expire after 1 week
                          1h )     ; Negative caching TTL of 1 hour

;----------------  records --------------

b._dns-sd._udp          IN      PTR     device-discovery
lb._dns-sd._udp         IN      PTR     device-discovery
device-discovery        IN      NS      test-server-host
test-server-host        IN      A       192.168.1.10

;--------------- End of records ---------

;
; Name servers
;
@ IN NS MyUbuntu
MyUbuntu IN      A       192.168.1.28
devicemachine  IN  A     192.168.1.10

Dig gives answers for b._dns-sd._udp.testzone.local, lb._dns-sd._udp.testzone.local and test-server-host, but not for device-discovery.testzone.local. The following is the result when I do a dig for device-discovery.testzone.local NS:

kg@MyUbuntu:~$ dig @localhost NS device-discovery.testzone.local

; <<>> DiG 9.10.3-P4-Ubuntu <<>> @localhost NS device-discovery.testzone.local
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 13783
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;device-discovery.testzone.local. IN    NS

;; Query time: 3 msec
;; SERVER: 127.0.0.1#53(127.0.0.1)
;; WHEN: Fri Feb 17 07:35:27 AEDT 2017
;; MSG SIZE  rcvd: 64

Is there anything wrong in the zone file? I saw lots of examples on the web and I was expecting it to work.

Thanks a lot.

UPDATED ZONE FILE A BIT, STILL NOT WORKING

kg@MyUbuntu:/etc/bind/zones$ cat db.testzone.local
testzone.local. IN SOA MyUbuntu hostname (
                          9        ; Serial
                          3h       ; Refresh after 3 hours
                          1h       ; Retry after 1 hour
                          1w       ; Expire after 1 week
                          1h )     ; Negative caching TTL of 1 hour

testzone.local. IN NS MyUbuntu.testzone.local.
testzone.local. IN NS hostname.testzone.local.

b._dns-sd._udp      IN  PTR     device-discovery
lb._dns-sd._udp     IN  PTR     device-discovery
device-discovery    IN  NS      test-server-host
test-server-host    IN  A       192.168.1.10

;
; Name servers
;
@ IN NS MyUbuntu
MyUbuntu    IN  A   192.168.1.28
hostname IN A 192.168.1.28

kg@MyUbuntu:/etc/bind/zones$ named-checkzone testzone.local db.testzone.local
db.testzone.local:1: no TTL specified; using SOA MINTTL instead
zone testzone.local/IN: loaded serial 9
OK
kg@MyUbuntu:/etc/bind/zones$ sudo service bind9 restart
kg@MyUbuntu:/etc/bind/zones$ sudo service bind9 status
● bind9.service - BIND Domain Name Server
   Loaded: loaded (/lib/systemd/system/bind9.service; enabled; vendor preset: enabl
   Active: active (running) since Fri 2017-02-17 11:55:59 AEDT; 5s ago
     Docs: man:named(8)
  Process: 5402 ExecStop=/usr/sbin/rndc stop (code=exited, status=0/SUCCESS)
 Main PID: 5407 (named)
    Tasks: 5 (limit: 9830)
   CGroup: /system.slice/bind9.service
           └─5407 /usr/sbin/named -f -u bind

Feb 17 11:55:59 MyUbuntu named[5407]: managed-keys-zone: loaded serial 41
Feb 17 11:55:59 MyUbuntu named[5407]: zone 0.in-addr.arpa/IN: loaded serial
Feb 17 11:56:00 MyUbuntu named[5407]: /etc/bind/zones/db.testzone.local:1: n
Feb 17 11:56:00 MyUbuntu named[5407]: zone testzone.local/IN: loaded serial
Feb 17 11:56:00 MyUbuntu named[5407]: zone localhost/IN: loaded serial 2
Feb 17 11:56:00 MyUbuntu named[5407]: zone 127.in-addr.arpa/IN: loaded seria
Feb 17 11:56:00 MyUbuntu named[5407]: zone 255.in-addr.arpa/IN: loaded seria
Feb 17 11:56:00 MyUbuntu named[5407]: all zones loaded
Feb 17 11:56:00 MyUbuntu named[5407]: running
Feb 17 11:56:00 MyUbuntu named[5407]: zone testzone.local/IN: sending notifi
kg@MyUbuntu:/etc/bind/zones$ dig @localhost device-discovery.testzone.local. NS

; <<>> DiG 9.10.3-P4-Ubuntu <<>> @localhost device-discovery.testzone.local. NS
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 52820
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;device-discovery.testzone.local. IN    NS

;; Query time: 15 msec
;; SERVER: 127.0.0.1#53(127.0.0.1)
;; WHEN: Fri Feb 17 11:57:02 AEDT 2017
;; MSG SIZE  rcvd: 64
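One hedged observation: an NS record at `device-discovery` is a delegation — it creates a zone cut and tells BIND that names at and below `device-discovery.testzone.local` live on another authoritative server (here `test-server-host`, 192.168.1.10). The parent then no longer answers authoritatively for that name, so unless 192.168.1.10 actually serves a `device-discovery.testzone.local` zone, lookups fail. If the goal is simply for this same BIND server to answer for `device-discovery`, a sketch without the delegation:

```
; Sketch: serve device-discovery from this zone directly instead of
; delegating it to another server.
b._dns-sd._udp      IN  PTR  device-discovery
lb._dns-sd._udp     IN  PTR  device-discovery
device-discovery    IN  A    192.168.1.10   ; plain record, no NS cut
```

Keep the NS delegation only if 192.168.1.10 really runs a name server that is configured as authoritative for the `device-discovery.testzone.local` subzone.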

Is it possible to lock a file in Linux so it can't be read if another process has it open for writing?

Posted: 04 Jun 2022 11:07 AM PDT

We all know that Linux/Unix will automatically block attempts to write to a file that is currently open for writing by another process. But is it possible to block/lock read access to a file if another process has it open for writing? I've got two different scripts, both of which start at random times and run at various points during the day: one script overwrites a particular file; the other one reads from that file. I want the second one to block (wait) or fail if the first one has the file open.
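One caveat: ordinary Unix file locks are advisory, so both scripts have to opt in; nothing blocks automatically. `flock(1)` is a minimal way to do that from shell. A sketch (the lock path `/tmp/demo.lock` is illustrative):

```shell
# Writer/reader coordination with flock(1). Both sides must use the same
# lock file; the kernel then serializes them.

LOCK=/tmp/demo.lock

# Writer side: take an exclusive lock on fd 9 for as long as the rewrite runs.
exec 9>"$LOCK"
flock -x 9
echo "writer holds lock"

# Reader side (normally a separate script): request a shared lock.
# With -n it fails immediately instead of waiting, because the writer
# still holds the exclusive lock.
flock -n -s "$LOCK" -c 'echo reader got lock' || echo "reader blocked"

# Writer finishes: closing fd 9 releases the lock, and the reader succeeds.
exec 9>&-
flock -n -s "$LOCK" -c 'echo reader got lock'
```

Drop `-n` on the reader side to get the "block and wait" behavior instead of failing fast. The lock protects cooperating processes only; a third program that never calls flock can still read the file.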

OpenVPN server and client configuration for openwrt

Posted: 04 Jun 2022 02:00 PM PDT

I have an OpenWRT router with WAN(eth1) and LAN(eth0) interface.

I wanted to install an openVPN server on this router, so I could access from the WAN side into the LAN side.

My LAN(eth0) has a DHCP server configured over, and has a static IP of 172.20.51.61/24. When I connect any PC to LAN(eth0) it gets some IP in this range 172.20.51.100/24 to 172.20.51.150/24.

I have installed openvpn easy rsa and other required utilities and also generated keys for client and server. I have copied those keys to client as well.

What I would want is a very basic steps and configuration to gain some confidence, to establish a connection from client to server.

The net-based blogs mention so much extra information that it becomes difficult for networking beginners to follow.

Edit: the /etc/easy-rsa/vars file has following contents.

export EASY_RSA="/etc/easy-rsa"
export OPENSSL="openssl"
export PKCS11TOOL="pkcs11-tool"
export GREP="grep"

export KEY_CONFIG=`/usr/sbin/whichopensslcnf $EASY_RSA`
export KEY_DIR="$EASY_RSA/keys"
export PKCS11_MODULE_PATH="dummy"
export PKCS11_PIN="dummy"
export KEY_SIZE=2048
export CA_EXPIRE=3650
export KEY_EXPIRE=3650
export KEY_COUNTRY="IN"
export KEY_PROVINCE="MH"
export KEY_CITY="Pune"
export KEY_ORG="My Org"
export KEY_EMAIL="me@myhost.mydomain"
export KEY_OU="MyOrganizationalUnit"
export KEY_NAME="EasyRSA"

# PKCS11 Smart Card
# export PKCS11_MODULE_PATH="/usr/lib/changeme.so"
# export PKCS11_PIN=1234

# If you'd like to sign all keys with the same Common Name,
# uncomment the KEY_CN export below
# You will also need to make sure your OpenVPN server config
# has the duplicate-cn option set
# export KEY_CN="CommonName"

Edit 2: I have given WAN a static IP 192.168.18.100/24 which a VPN client with IP 192.168.18.101/24 will try to access.

The server configuration is:

package openvpn

config 'openvpn' 'samplevpnconfig'
        option 'enable' '1'
        option 'port' '1194'
        option 'proto' 'udp'
        option 'dev' 'tun0'
        option 'client_to_client' '1'
        option 'keepalive' '10 120'
        option 'comp_lzo' '1'
        option 'persist_key' '1'
        option 'persist_tun' '1'
        option 'verb' '3'
        option 'mute' '20'
        option 'ifconfig_pool_persist' '/tmp/ipp.txt'
        option 'status' '/tmp/openvpn-status.log'
        option 'ca' '/etc/openvpn/ca.crt'
        option 'cert' '/etc/openvpn/server.crt'
        option 'key' '/etc/openvpn/server.key'
        option 'dh' '/etc/openvpn/dh1024.pem'

        # this should be on a completely different subnet than your LAN
        option 'server' '192.168.18.100 255.255.255.0'
        list 'push' 'route 172.20.51.61 255.255.255.0'   # this should MATCH your current LAN info
        list 'push' 'dhcp-option DNS 172.20.51.1'        # this should MATCH your current LAN info
        list 'push' 'dhcp-option DOMAIN 172.20.51.1'     # this should MATCH your current LAN info

Where 172.20.51.61 is the LAN (eth0) static IP address. Honestly, I have no idea what those non-obvious parameters mean.

The client configuration in Client machine /etc/openvpn/client.conf

client
proto udp
dev tun0

remote 192.168.18.100 1194
resolv-retry infinite
nobind
persist-key
persist-tun
verb 3
comp-lzo
ca /etc/openvpn/ca.crt
cert /etc/openvpn/myclient.crt
key /etc/openvpn/myclient.key

When I try the following command: # openvpn client.conf

I get TLS error: (check your network connectivity)

and tcpdump on server gives:

13:57:19.026056 IP 192.168.18.101.34212 > 192.168.18.100.openvpn: UDP, length 14
13:57:19.026147 IP 192.168.18.100 > 192.168.18.101: ICMP 192.168.18.100 udp port openvpn unreachable, len0

But connectivity seems fine (ping and tcpdump show packets coming through).
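Two hedged observations from the capture and config above. First, the ICMP "udp port openvpn unreachable" reply means the packets arrive but nothing is listening on UDP 1194 — worth confirming the daemon is actually running and bound (e.g. `netstat -lun`) and that the OpenWRT firewall accepts UDP 1194 on WAN. Second, the config's own comment says the tunnel pool must be a completely different subnet, yet `option 'server'` reuses the WAN range 192.168.18.0/24. A sketch of the fix (10.8.0.0/24 is just the conventional example tunnel range):

```
# /etc/config/openvpn sketch: the VPN tunnel needs its own subnet,
# distinct from both WAN (192.168.18.0/24) and LAN (172.20.51.0/24).
        option 'server' '10.8.0.0 255.255.255.0'
```

With that, clients get tunnel addresses in 10.8.0.x and the pushed route to 172.20.51.0/24 can work without colliding with the address the client already uses to reach the server.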

Extract correct format of key and cert files from .p12 file for a SSL MEAN App?

Posted: 04 Jun 2022 01:00 PM PDT

I'm writing a MEAN app for my university department on my Mac development machine. I started with the boilerplate from mean.io. In the configuration file config/env/all.js, there is a section for configuring SSL:

https: {
    port: false,

    // Paths to key and cert as string
    ssl: {
      key: '',
      cert: ''
    }
}

I understand that I'm supposed to change port:false to the port I'm using for SSL, port:443, and to provide the filepaths to the key and cert.

The steps I've taken:

  • create directory ssl in the app root
  • open Keychain Access on mac
  • under Certificates on the left, I right click on my department's CA-issued (DigiCert) Certificate and click on Export
  • Save as Certificates.p12 in app_root/ssl
  • Click OK when prompted to enter a password to protect exported file to skip doing so

Here's where I'm confused. Based on instructions found here the paths are to the public key and certificate. How do I extract the public key from this Certificates.p12 file, and how do I extract the certificate from it such that the mean app will correctly load it?

Additional steps I've taken, attempting instructions found here:

public key generation

  • cd ssl
  • openssl pkcs12 -in Certificates.p12 -nocerts -nodes | openssl rsa > id_rsa
  • openssl rsa -in id_rsa -pubout > pubkey.txt

certificate generation

  • openssl pkcs12 -in Certificates.p12 -clcerts -nokeys -out pubcert.txt

Then I set the following:

https: {
    port: 443,

    // Paths to key and cert as string
    ssl: {
      key: rootPath + '/ssl/pubkey.txt',
      cert: rootPath + '/ssl/pubcert.txt'
    }
}

On trying to run the server, I get the following error:

crypto.js:100
      c.context.setKey(options.key);
                ^
Error: error:0906D06C:PEM routines:PEM_read_bio:no start line
    at Object.exports.createCredentials (crypto.js:100:17)
    at Server (tls.js:1127:28)
    at new Server (https.js:35:14)
    at Object.exports.createServer (https.js:54:10)

Any help would be amazing!
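A likely culprit, hedged: Node's TLS server wants the private key, not the public key, so feeding it `pubkey.txt` produces exactly this "no start line" PEM error. A sketch of the extraction (a throwaway self-signed pair stands in for the real keychain export here, and `pass:demo` stands in for the export password; file names under `/tmp` are illustrative):

```shell
# Stand-in for the Keychain-exported bundle: build a demo .p12 so the
# extraction commands below can be shown end to end.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" -days 1 \
        -keyout /tmp/demo.key -out /tmp/demo.crt 2>/dev/null
openssl pkcs12 -export -inkey /tmp/demo.key -in /tmp/demo.crt \
        -passout pass:demo -out /tmp/Certificates.p12

# 1) Extract the PRIVATE key (this is what https.ssl.key should point at):
openssl pkcs12 -in /tmp/Certificates.p12 -passin pass:demo \
        -nocerts -nodes -out /tmp/server.key

# 2) Extract the certificate (for https.ssl.cert):
openssl pkcs12 -in /tmp/Certificates.p12 -passin pass:demo \
        -clcerts -nokeys -out /tmp/server.crt
```

Point `ssl.key` at the extracted private key and `ssl.cert` at the certificate; both files should contain `-----BEGIN ...-----` PEM blocks, which is exactly what the PEM_read_bio error says is missing from the current files.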

Supervisord (exit status 1; not expected) centos python

Posted: 04 Jun 2022 11:01 AM PDT

Ran into additional issue with Supervisord.

CentOS 6.5, Supervisor. Python 2.6 is installed with the OS; Python 2.7 is installed in /usr/local/bin.

supervisord program settings

[program:inf_svr]
process_name=inf_svr%(process_num)s
directory=/opt/inf_api/
environment=USER=root,PYTHONPATH=/usr/local/bin/
command=python2.7 /opt/inf_api/inf_server.py --port=%(process_num)s
startsecs=2
user=root
autostart=true
autorestart=true
numprocs=4
numprocs_start=8080
stderr_logfile = /var/log/supervisord/tornado-stderr.log
stdout_logfile = /var/log/supervisord/tornado-stdout.log

I can run inf_server.py with:

python2.7 inf_server.py --port=8080  

with no problems. I made sure the files were executable (that was my problem before).

Any thoughts?

UPDATE: I can't get it to even launch a basic Python script without failing. I started by commenting out the old program, adding a new one, and then putting in:

command=python /opt/inf_api/test.py  

where test.py just writes something to the screen and to a file. Fails with exit status 0. So I started adding back in the location of python (after discovering it with 'which python')

environment=PYTHONPATH=/usr/bin  

I tried putting the path in single quotes, tried adding USER=root to the environment, and tried adding

directory=opt/inf_api/  

tried adding

user=root  

All the same thing: exit status 0. Nothing seems to be added to any log files either, except what I'm seeing from the supervisord debug output.

Man I am at a loss.

UPDATE 2: So there is absolutely NOTHING in the stderr and stdout logs for this. It just fails with (exit status 1; not expected) — no errors from the program I am trying to run, nothing...

2014-03-05 11:25:01,027 INFO spawned: 'inf_svr' with pid 15542
2014-03-05 11:25:01,315 INFO exited: inf_svr (exit status 1; not expected)
2014-03-05 11:25:01,315 INFO received SIGCLD indicating a child quit

Update 3: I have a simple Python script started and managed by supervisord using /usr/local/bin/python2.7, so at least I know it can invoke the different Python version.

However I am still getting an (error status 1, not expected) on the original script.

I added in some exception logging, and even just some print-to-file lines at the beginning of the script. It's like it never even makes it there; it fails before it even launches.
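For what it's worth, PYTHONPATH points at module directories, not interpreter locations, so `environment=PYTHONPATH=/usr/local/bin/` likely does nothing here, and supervisord's own PATH may not contain `python2.7`. A hedged sketch of a config worth trying — absolute interpreter path, and stderr folded into stdout so the real traceback actually lands in a log file (the log filename is illustrative):

```
[program:inf_svr]
process_name=inf_svr%(process_num)s
directory=/opt/inf_api/
command=/usr/local/bin/python2.7 /opt/inf_api/inf_server.py --port=%(process_num)s
numprocs=4
numprocs_start=8080
user=root
autostart=true
autorestart=true
startsecs=2
redirect_stderr=true
stdout_logfile=/var/log/supervisord/inf_svr.log
```

If the process still dies silently, the redirected log should at least capture the ImportError or startup exception that currently never reaches either logfile.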

iptables POSTROUTING not working in local area

Posted: 04 Jun 2022 03:00 PM PDT

I used command

iptables -t nat -I POSTROUTING -o $INTERFACE -p tcp  -j SNAT --to-source $IP  

to make my server's outgoing packets appear to come from $IP. The problem is that it doesn't work inside the local network: when I send something to an address like 10.X.X.X, I'm not visible as $IP but as the old IP.

EDIT: There are 3 interfaces:

auto eth0
iface eth0 inet static
    address 46.X.X.152
    netmask 255.255.255.0
    network 46.X.X.0
    broadcast 46.X.X.255
    gateway 46.X.X.254

post-up /sbin/ifconfig eth0:0 178.X.X.28 netmask 255.255.255.255 broadcast 178.X.X.28
post-down /sbin/ifconfig eth0:0 down
post-up /sbin/ifconfig eth0:1 178.X.X.27 netmask 255.255.255.255 broadcast 178.X.X.27
post-down /sbin/ifconfig eth0:1 down

And in my iptables rule $INTERFACE=eth0 and $IP is 178.X.X.28 (from eth0:0).
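Two hedged guesses: `-o eth0` only matches packets that actually leave via eth0, so if the 10.X.X.X hosts are reached through another interface or route the rule is skipped; and connections that were already established keep whatever NAT mapping conntrack assigned when they started, so only fresh connections pick up a newly inserted rule. A sketch that matches on destination instead (`$IP` as in the question; the 10.0.0.0/8 range is an assumption about the "local area" addresses):

```
# Sketch: SNAT traffic to the local 10/8 range regardless of output
# interface; test with NEW connections, since existing conntrack entries
# keep their old source mapping.
iptables -t nat -I POSTROUTING -d 10.0.0.0/8 -p tcp -j SNAT --to-source $IP
```

`conntrack -F` (or simply re-opening the connections) forces the new mapping to take effect for traffic that was already flowing.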

https only connects locally using Apache

Posted: 04 Jun 2022 12:00 PM PDT

I have configured Apache to use SSL certificates. When I type https://mysite.com it connects locally, but when I try from outside the server, it doesn't connect.

Here are my config and other details; I hope someone can help me.

Firewall exception for port 443 is enabled.

OS :

windows server 2008 R2

Web server:

AppServ 2.5.10(Apache 2.2.8,PHP 5.2.6,MySQL 5.0.51b,phpMyAdmin-2.10.3)

Note: I have purchased the domain at godaddy.com. When I type www.mysite.com or mysite.com it works fine (it has been working for the last 2 years).

httpd.conf file

Added:

    Listen 443

Uncommented:

    LoadModule ssl_module modules/mod_ssl.so

Added:

<VirtualHost 23.23.95.84:80>
    DocumentRoot C:/AppServ/www/SaresInterno
    ServerName staging.soluntech.com
    SSLEngine on
    SSLCertificateFile C:/AppServ/Apache2.2/conf/certificate.crt
    SSLCertificateKeyFile C:/AppServ/Apache2.2/conf/ssl.key
    SSLCertificateChainFile C:/AppServ/Apache2.2/conf/sub.class1.server.ca.pem
    SSLCACertificateFile C:/AppServ/Apache2.2/conf/ca.pem
    SSLProtocol all -SSLv2
    SSLCipherSuite ALL:!ADH:!EXPORT:!SSLv2:RC4+RSA:+HIGH:+MEDIUM
    SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown
    CustomLog C:/AppServ/Apache2.2/logs/ssl_request_log \
        "%t %h %{SSL_PROTOCOL}x %{SSL_CIPHER}x \"%r\" %b"
</VirtualHost>

<VirtualHost _default_:443>
    DocumentRoot C:/AppServ/www/SaresInterno
    ServerName staging.soluntech.com
    SSLEngine on
    SSLCertificateFile C:/AppServ/Apache2.2/conf/certificate.crt
    SSLCertificateKeyFile C:/AppServ/Apache2.2/conf/ssl.key
    SSLCertificateChainFile C:/AppServ/Apache2.2/conf/sub.class1.server.ca.pem
    SSLCACertificateFile C:/AppServ/Apache2.2/conf/ca.pem
    SSLProtocol all -SSLv2
    SSLCipherSuite ALL:!ADH:!EXPORT:!SSLv2:RC4+RSA:+HIGH:+MEDIUM
    SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown
    CustomLog C:/AppServ/Apache2.2/logs/ssl_request_log \
        "%t %h %{SSL_PROTOCOL}x %{SSL_CIPHER}x \"%r\" %b"
</VirtualHost>

Where is my mysql log on OS X?

Posted: 04 Jun 2022 03:23 PM PDT

I checked /var/log and /usr/local/mysql and I can't seem to find the log. I am trying to troubleshoot an "error establishing a database connection" from a PHP function.
