Sunday, April 18, 2021

Recent Questions - Server Fault

Docker Nginx returning 404 on install.php

Posted: 18 Apr 2021 09:53 PM PDT

Let me first describe the server structure. My WordPress site is hosted inside a Docker container, along with nginx, on what I'll call Server B (https://serverB.com), but I am trying to access the site through https://serverA.com/blogs. If I configure WP_HOME as https://serverB.com, everything runs smoothly and I can install WordPress. But if I change WP_HOME to https://serverA.com/blogs, I suddenly get a 404 - Not Found error. (I brought the Docker containers down and deleted the volume between attempts.)

I added the following line in wp-config.php as well.

$_SERVER['REQUEST_URI'] = '/blogs' . $_SERVER['REQUEST_URI'];  

The 404 - Not Found error is returned by Docker's nginx. That means the request has travelled all the way to the container's nginx, which then either doesn't know how to handle the path or can't find the file...
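Since the container's nginx only serves files under /var/www/html, a request for /blogs/wp-login.php has nothing to match unless the /blogs prefix is handled somewhere along the chain. A minimal sketch of what serverA's proxy could look like (the published port 7070 is taken from the compose file below; everything else here is an assumption about the serverA setup, not a confirmed fix):

```nginx
# hypothetical serverA vhost fragment: pass /blogs/ through to the container's
# published port, so WordPress receives the same /blogs prefix that WP_HOME expects
location /blogs/ {
    proxy_pass http://serverB.com:7070;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Proto $scheme;
}
```

Note that the container's nginx would then also need a rule mapping /blogs/ onto the WordPress root (e.g. a rewrite stripping the prefix), otherwise it will keep returning 404 for paths that don't exist on disk.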

Error message from docker logs:

webserver_test | 192.168.192.1 - - [19/Apr/2021:04:44:18 +0000] "GET /blogs/wp-login.php HTTP/1.0" 404 556 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.114 Safari/537.36"  

docker-compose.yml file

version: '3'

services:
  db:
    image: mysql:8.0
    container_name: db_test
    restart: unless-stopped
    env_file: .env
    environment:
      - MYSQL_DATABASE=wordpress
    volumes:
      - mysql_db_data_test:/var/lib/mysql
    command: '--default-authentication-plugin=mysql_native_password'
    networks:
      - app-network

  wordpress:
    depends_on:
      - db
    image: wordpress:5.1.1-fpm-alpine
    container_name: wordpress_test
    restart: unless-stopped
    env_file: .env
    environment:
      - WORDPRESS_DB_HOST=db:3306
      - WORDPRESS_DB_USER=$MYSQL_USER
      - WORDPRESS_DB_PASSWORD=$MYSQL_PASSWORD
      - WORDPRESS_DB_NAME=wordpress
      - WORDPRESS_CONFIG_EXTRA=
          define('WP_HOME','https://serverA.com/blogs/');
          define('WP_SITEURL', 'https://serverA.com/blogs/');
          define( 'WP_DEBUG', true );
    volumes:
      - wordpress_data_test:/var/www/html
    networks:
      - app-network

  webserver:
    depends_on:
      - wordpress
    image: nginx:1.15.12-alpine
    container_name: webserver_test
    restart: unless-stopped
    ports:
      - "7070:80"
    volumes:
      - wordpress_data_test:/var/www/html
      - ./nginx-conf:/etc/nginx/conf.d
    networks:
      - app-network

volumes:
  wordpress_data_test:
  mysql_db_data_test:

networks:
  app-network:
    driver: bridge

Please let me know if I need to add any more information. Thank you.

Start getty on pty

Posted: 18 Apr 2021 09:25 PM PDT

I'm trying to write an ASCII-to-Baudot converter (for a teletype, so obviously with dropped characters) using pseudoterminals. The idea is to have a pty master/slave pair: write to the slave, read from the master, convert ASCII to Baudot, and send to the teletype. Input from the teletype will be read in, converted from Baudot to ASCII, sent to the master, and processed by the slave.

I can do this with a direct serial connection (screen /dev/pts/x), but agetty doesn't seem to work. I'm monitoring the master with

import pty
import os

m, s = pty.openpty()
print(s, os.readlink(os.path.join('/proc/self/fd', str(s))))

while True:
    data = os.read(m, 1)
    print(data)

and can see data sent through screen, but not agetty. The agetty command I'm using is (where /dev/pts/2 is the slave file)

sudo /sbin/agetty 9600 /dev/pts/2  

How do I start a getty on a pseudoterminal? Does getty look for a special response character (similar to a modem) before sending data? The getty process is not sending any data to my pseudoterminal.
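For what it's worth, agetty generally expects the terminal to be its controlling tty, which `pty.openpty()` alone doesn't arrange for an externally started process. A hedged sketch using `pty.fork()`, which makes the slave the child's controlling terminal before exec (shown with `/bin/echo` as a stand-in for `/sbin/agetty`, which would need root):

```python
import os
import pty

# pty.fork() creates the pair and makes the slave the child's controlling tty
pid, master_fd = pty.fork()
if pid == 0:
    # child: stdin/stdout/stderr are now the slave; swap in /sbin/agetty here
    os.execv('/bin/echo', ['echo', 'hello from the slave'])
else:
    data = os.read(master_fd, 1024)  # parent reads the child's output via the master
    os.waitpid(pid, 0)
    print(data)
```

The ASCII/Baudot conversion would then live in the parent's read/write loop on `master_fd`.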

Best MySQL/MariaDB scaling strategy for shared hosting provider

Posted: 18 Apr 2021 09:56 PM PDT

As a small shared hosting service provider, our MariaDB server will hit a hardware limit very soon, so we want to add another machine running a second MariaDB instance.

Our goals are:

  1. Reduce disk/CPU/RAM usage on the current machine.
  2. By reducing resource usage (1), free up more room for the current users/tables on this machine.
  3. Easily scale to more machines in the future.
  4. Our customers should not notice anything at all; they should not be forced to change the configuration of their software.

What I have in mind is an instance that acts as a proxy: it knows which database lives on which instance, automatically routes each query to the right instance, then receives the result and forwards it to the client.

Here are my questions:

Is this possible? What is its technical name, and how can we implement it in MySQL? Is there a better way to fulfill our goals? Best regards,

Glassfish : Class name is wrong or classpath is not set for : com.mysql.cj.jdbc.Driver Please check the server.log for more details

Posted: 18 Apr 2021 07:09 PM PDT

I built a Java EE project and chose GlassFish as the server and MySQL as the database. When I try to integrate the MySQL database into the GlassFish server, there are some errors. I fill in the database properties such as name, server, PortNumber, etc. When I test the connection by pressing the Ping button, this message is displayed:

An error has occurred
Ping Connection Pool failed for DemoDatabase. Class name is wrong or classpath is not set for : com.mysql.cj.jdbc.Driver
Please check the server.log for more details.

And this message is in server.log:

Cannot find poolName in ping-connection-pool command model, file a bug
injection failed on org.glassfish.connectors.admin.cli.PingConnectionPool.poolName with class java.lang.String

Windows service running on Windows 10 to persistently connect to a network share on a computer running Windows 7 - no domain

Posted: 18 Apr 2021 07:01 PM PDT

I'm trying to write a Windows service that will persistently connect to and pull files from a network share on a Windows 7 computer. Both computers are on a private network; the network share has read permissions set to "Everyone" and write permissions set to administrators. Neither computer is on a domain.

I'm able to access the network share through the GUI without entering a username or password. However, when I use the UNC path in a Windows service running as Network Service, it says the UNC path doesn't exist. I've also tried creating a user on the Windows 10 computer with the same credentials as a non-administrative user on the Windows 7 computer (as suggested here), with no luck either.

Does anybody know how to accomplish this?

How to create a docker container that simply forwards a range of ports to another, external IP address?

Posted: 18 Apr 2021 06:36 PM PDT

I would like to create a Docker container whose only job is to forward any connection on a range of ports to another host, at the same port.

I've looked at iptables, pipework, and haproxy, and they all look rather complex.

socat and redir look like they could do what I want, but they don't take a port range.

FROM ubuntu:latest

ENV DEST_IP=8.8.8.8 \
    PORTS=3000-9999

RUN apt update && apt install -y ????

CMD ???? --source 0.0.0.0 --ports ${RANGE} --dest ${DEST_IP}
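For completeness, iptables can do this in a single rule despite its complexity, because DNAT preserves the original destination port when --to-destination carries no port. A hedged, untested sketch filling in the placeholders (assumes the container runs with --cap-add=NET_ADMIN):

```dockerfile
FROM ubuntu:latest

ENV DEST_IP=8.8.8.8 \
    PORTS=3000:9999

RUN apt update && apt install -y iptables

# one DNAT rule covers the whole range (note the colon syntax for --dport);
# MASQUERADE rewrites the return path so replies flow back through the container
CMD iptables -t nat -A PREROUTING -p tcp --dport ${PORTS} -j DNAT --to-destination ${DEST_IP} \
    && iptables -t nat -A POSTROUTING -j MASQUERADE \
    && sleep infinity
```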

I am using Gmail with a vanity account alias and external SMTP server (AWS SES). My first email is not being threaded

Posted: 18 Apr 2021 07:00 PM PDT

So this is a minor inconvenience, but I am curious whether anyone well versed in email forwarding and Gmail can help me.

I have a vanity domain I'll call coder.dev. I am using code@coder.dev as an alias for my personal Gmail, code@gmail.com.

  • I compose the initial message, message_id=A, in Gmail, and send it through AWS SES SMTP.
  • AWS SES creates its own message_id=B and sends it to end user (stranger@gmail.com)
  • stranger@gmail.com replies with message_id=C, and sends it to AWS SES. It also sets References: B
  • My email forwarding Lambda forwards the message to me (code@gmail.com), with message_id=D.
  • Gmail does not show A in the same thread as D, on my end (code@gmail.com)

Note that if I reply to D, and stranger replies, and I reply back, etc. etc., all those messages are threaded together. Because at this point, we are building up the References: list with ids we have both seen before. It's only A that is left out.

What's funny/sad is that message A also contains X-Gmail-Original-Message-ID: A, and that makes it to stranger@gmail.com, but then stranger doesn't send that header back in message C or use it in the References: list. Google doesn't know how to talk to itself :|

Also see https://workspaceupdates.googleblog.com/2019/03/threading-changes-in-gmail-conversation-view.html
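Gmail's threading (per the link above) leans on the References/In-Reply-To chain. One hedged workaround would be to have the forwarding Lambda append the original Gmail id (A) to the References list before delivering D, so Gmail can connect the reply back to the message it sent; the concrete ids below are invented for illustration:

```python
from email.message import EmailMessage

# hypothetical forwarded message D, rebuilt by the Lambda so Gmail can
# thread it with the original message A it otherwise never sees referenced
fwd = EmailMessage()
fwd['Subject'] = 'Re: hello'
fwd['Message-ID'] = '<D@forwarder.example>'
fwd['In-Reply-To'] = '<C@mail.gmail.com>'
# keep stranger's References (B, C) and prepend the original Gmail id (A)
fwd['References'] = '<A@mail.gmail.com> <B@ses.amazonaws.com> <C@mail.gmail.com>'

print(fwd['References'])
```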

How to Trace Who Was Using my Mail Relay for Spamming?

Posted: 18 Apr 2021 04:21 PM PDT

I have a Postfix mail relay server running as an Exchange smarthost, as well as hosting other mail locally.

Last week I observed an attack on this server: someone is using it to send mass email to various destinations.

I can't find out where the connections come from, and the "from" address is also masked.

Below are the mail logs:

Apr 16 06:29:10 mail.xxx.com postfix/qmgr[25497]: EC5A91D727: from=<>, size=3096, nrcpt=1 (queue active)
Apr 16 06:29:10 mail.xxx.com postfix/bounce[12183]: B37D31D6FA: sender non-delivery notification: EC5A91D727
Apr 16 06:29:10 mail.xxx.com postfix/qmgr[25497]: B37D31D6FA: removed
Apr 16 06:29:11 mail.xxx.com postfix/smtp[12164]: 1A9B71D801: to=<xxx@inver**.com>, relay=inver**.com[164.138.x.x]:25, delay=50, delays=39/0/6.7/5, dsn=2.0.0, status=sent (250 OK id=1lX6jh-000875-TC)
Apr 16 06:29:11 mail.xxx.com postfix/qmgr[25497]: 1A9B71D801: removed
Apr 16 06:29:11 mail.xxx.com postfix/smtp[11990]: 3BEAB1D9C3: to=<xxx@tms**.pl>, relay=tms**.pl[194.181.x.x]:25, delay=49, delays=37/0/6.7/5.4, dsn=2.0.0, status=sent (250 OK id=1lX6ji-000469-QT)
Apr 16 06:29:11 mail.xxx.com postfix/qmgr[25497]: 3BEAB1D9C3: removed
Apr 16 06:29:12 mail.xxx.com postfix/smtp[12954]: 418621D80D: to=<xxx@medi**.com.cn>, relay=mxw**.com[198.x.x.x]:25, delay=51, delays=38/0/8.5/4.5, dsn=5.0.0, status=bounced (host mxw.mxhichina.com[198.11.189.243] said: 551 virus infected mail rejected (in reply to end of DATA command))
Apr 16 06:29:12 mail.xxx.com postfix/cleanup[7936]: 6711A1D7B7: message-id=<20210415182912.6711A1D7B7@mail.xxx.com>
Apr 16 06:29:12 mail.xxx.com postfix/bounce[12184]: 418621D80D: sender non-delivery notification: 6711A1D7B7
Apr 16 06:29:12 mail.xxx.com postfix/qmgr[25497]: 418621D80D: removed
Apr 16 06:29:12 mail.xxx.com postfix/qmgr[25497]: 6711A1D7B7: from=<>, size=2554, nrcpt=1 (queue active)
Apr 16 06:29:12 mail.xxx.com postfix/smtp[11499]: 65E4C1D95F: to=<xxx@an**.com>, relay=aspmx.l.google.com[172.217.x.x]:25, delay=51, delays=38/0/6.3/6.7, dsn=5.7.0, status=bounced (host aspmx.l.google.com[172.217.194.27] said: 552-5.7.0 This message was blocked because its content presents a potential 552-5.7.0 security issue. Please visit 552-5.7.0 https://support.google.com/mail/?p=BlockedMessage to review our 552 5.7.0 message content and attachment content guidelines. z63si3810735ybh.300 - gsmtp (in reply to end of DATA command))
Apr 16 06:29:12 mail.xxx.com postfix/cleanup[10468]: 705F91D801: message-id=<20210415182912.705F91D801@mail.xxx.com>
Apr 16 06:29:12 mail.xxx.com postfix/smtp[11996]: F05911DBCA: to=<xxx@maq**.ae>, relay=maq**.protection.outlook.com[104.47.x.x]:25, delay=36, delays=27/0/3.1/6, dsn=2.6.0, status=sent (250 2.6.0 <20210415112836.BE31E4C0C57EAA1B@alshirak.com> [InternalId=93338229282509, Hostname=DB8PR10MB2745.EURPRD10.PROD.OUTLOOK.COM] 933811 bytes in 3.322, 274.451 KB/sec Queued mail for delivery)
Apr 16 06:29:12 mail.xxx.com postfix/qmgr[25497]: F05911DBCA: removed
Apr 16 06:29:12 mail.xxx.com postfix/bounce[12183]: 65E4C1D95F: sender non-delivery notification: 705F91D801
Apr 16 06:29:12 mail.xxx.com postfix/qmgr[25497]: 65E4C1D95F: removed

How can I find the source of the attack? Is there a way to restrict relaying to only a specific range of domains?

I'm not a Postfix professional, so any suggestions/advice would be appreciated.
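As a starting point for the tracing question: the qmgr/smtp lines above don't show the submitting client, but the corresponding smtpd lines (matched by queue ID) normally carry client= and, for authenticated relaying, sasl_username=. A hedged sketch of counting submitters from those lines (the sample lines here are invented placeholders, not taken from this server):

```python
import re
from collections import Counter

# hypothetical smtpd lines; on a real server, grep them out of /var/log/maillog
log_lines = [
    "Apr 16 06:28:00 mail postfix/smtpd[111]: EC5A91D727: client=unknown[203.0.113.7], sasl_method=LOGIN, sasl_username=alice",
    "Apr 16 06:28:05 mail postfix/smtpd[112]: 1A9B71D801: client=unknown[203.0.113.7], sasl_method=LOGIN, sasl_username=alice",
    "Apr 16 06:28:09 mail postfix/smtpd[113]: 3BEAB1D9C3: client=host.example[198.51.100.9]",
]

# tally the connecting hosts; the busiest one is the likely injection point
clients = Counter(
    m.group(1)
    for line in log_lines
    if (m := re.search(r"client=([^,\s]+)", line))
)
print(clients.most_common(1))
```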

Trying to force Apache to use only TLSv1.3 on a vhost, but it refuses to disable TLSv1.2

Posted: 18 Apr 2021 04:17 PM PDT

I have a test vhost on my web server for which I'm trying to enforce TLSv1.3-only, but Apache refuses to disable TLSv1.2. TLSv1.3 does work; however, the following validation services all show that TLSv1.2 is still enabled on my vhost:

https://www.digicert.com/help/

https://www.ssllabs.com/ssltest/

https://www.immuniweb.com/ssl/

I've tried a few different ways, including all of the following:

SSLProtocol -all +TLSv1.3
SSLProtocol +all -SSLv3 -TLSv1 -TLSv1.1 -TLSv1.2
SSLProtocol -all -TLSv1.2 +TLSv1.3
SSLProtocol +TLSv1.3

System info:

Ubuntu 20.04.2 LTS
OpenSSL 1.1.1f
Apache 2.4.41

Global SSL configuration:

SSLRandomSeed startup builtin
SSLRandomSeed startup file:/dev/urandom 512
SSLRandomSeed connect builtin
SSLRandomSeed connect file:/dev/urandom 512
AddType application/x-x509-ca-cert .crt
AddType application/x-pkcs7-crl .crl
SSLPassPhraseDialog exec:/usr/share/apache2/ask-for-passphrase
SSLSessionCache shmcb:${APACHE_RUN_DIR}/ssl_scache(512000)
SSLSessionCacheTimeout 300
SSLCipherSuite HIGH:!aNULL
#SSLProtocol all -SSLv3
SSLUseStapling On
SSLStaplingCache "shmcb:${APACHE_RUN_DIR}/ssl_stapling(128000000)"
SSLStaplingResponderTimeout 2
SSLStaplingReturnResponderErrors off
SSLStaplingFakeTryLater off
SSLStaplingStandardCacheTimeout 86400

vhost configuration:

<VirtualHost XX.XX.XX.XX:443>
    ServerName testing.example.com
    DocumentRoot "/var/www/test"
    ErrorLog ${APACHE_LOG_DIR}/test-error.log
    CustomLog ${APACHE_LOG_DIR}/test-access.log combined
#   Include /etc/letsencrypt/options-ssl-apache.conf
    SSLEngine on
    SSLCompression off
    SSLCertificateFile /etc/letsencrypt/live/testing.example.com/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/testing.example.com/privkey.pem
    Header always set Strict-Transport-Security "max-age=31536000; includeSubDomains"
#   SSLCipherSuite "HIGH:!aNULL:!MD5:!3DES:!CAMELLIA:!AES128"
#   SSLHonorCipherOrder off
    SSLProtocol -all +TLSv1.3
    SSLOpenSSLConfCmd DHParameters "/etc/ssl/private/dhparams_4096.pem"
</VirtualHost>

info from "apachectl -S":

root@domain:~# apachectl -S
VirtualHost configuration:
XX.XX.XX.XX:80      is a NameVirtualHost
... (irrelevant) ...
XX.XX.XX.XX:443     is a NameVirtualHost
        default server blah.example.com (/etc/apache2/sites-enabled/sites.conf:13)
        port 443 namevhost blah.example.com (/etc/apache2/sites-enabled/sites.conf:13)
        **port 443 namevhost test.example.com (/etc/apache2/sites-enabled/sites.conf:29)**
        port 443 namevhost blah.example.com (/etc/apache2/sites-enabled/sites.conf:54)
        port 443 namevhost blah.example.com (/etc/apache2/sites-enabled/sites.conf:93)
        port 443 namevhost blah.example.org (/etc/apache2/sites-enabled/sites.conf:111)
        port 443 namevhost blah.example.tk (/etc/apache2/sites-enabled/sites.conf:132)
        port 443 namevhost blah.example.com (/etc/apache2/sites-enabled/sites.conf:145)
[XX:XX:XX:XX:XX:XX:XX:XX]:80 is a NameVirtualHost
... (irrelevant) ...
[XX:XX:XX:XX:XX:XX:XX:XX]:443 is a NameVirtualHost
... (irrelevant; note the subdomain in question only has an IPv4 DNS entry, no IPv6) ...
ServerRoot: "/etc/apache2"
Main DocumentRoot: "/var/www/html"
Main ErrorLog: "/var/log/apache2/error.log"
Mutex fcgid-proctbl: using_defaults
Mutex ssl-stapling: using_defaults
Mutex ssl-cache: using_defaults
Mutex default: dir="/var/run/apache2/" mechanism=default
Mutex mpm-accept: using_defaults
Mutex fcgid-pipe: using_defaults
Mutex watchdog-callback: using_defaults
Mutex rewrite-map: using_defaults
Mutex ssl-stapling-refresh: using_defaults
PidFile: "/var/run/apache2/apache2.pid"
Define: DUMP_VHOSTS
Define: DUMP_RUN_CFG
Define: MODPERL2
Define: ENABLE_USR_LIB_CGI_BIN
User: name="www-data" id=33
Group: name="www-data" id=33
root@domain:~#

I have it commented out in the vhost in question, but other vhosts use a letsencrypt/options-ssl-apache.conf, which I'll include here in case it could be interfering somehow:

SSLEngine on
SSLProtocol             all -SSLv2 -SSLv3 -TLSv1 -TLSv1.1
SSLCipherSuite          ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384
SSLHonorCipherOrder     on
SSLSessionTickets       off
SSLOptions +StrictRequire

Is it possible to receive email for aliases of multiple domains on a single KVM?

Posted: 18 Apr 2021 02:37 PM PDT

I'm pretty new, so please do go easy on me. :)

TL;DR: is it possible to receive email for aliases of multiple domains on a single KVM?

I had a DigitalOcean server with multiple websites hosted on it, and needed email aliases for more than one of those domains. On several occasions, mail was not delivered; I believe this may be because the corresponding domain did not have a PTR record. (Could be wrong, I'm new here.)

The PTR records with DO are tied to droplet names, so it seemed impossible to have PTR records for multiple domains; thus I was stuck with incomplete MX records, and that may have been the cause of my undelivered mail.

I was thinking, there must be a way around this issue, besides renting another KVM.

Please share your advice.

Best regards, Glenn

Why does AWS use host names for their load balancers instead of IP addresses?

Posted: 18 Apr 2021 03:03 PM PDT

I'm getting to know how load balancers work in cloud platforms. I'm specifically talking about load balancers you use to expose multiple backends to the public internet here, not internal load balancers.

I started with GCP, where when you provision a load balancer, you get a single public IP address. Then I learned about AWS, where when you provision a load balancer (or at least, the Elastic Load Balancer), you get a host name (like my-loadbalancer-1234567890.us-west-2.elb.amazonaws.com).

With the single IP, I can set up any DNS records I like. This means I can keep my name servers outside the cloud platform to set up domains, and I can do DNS challenges for Let's Encrypt because I can set a TXT record for my domain after setting an A record for it. With the host name approach, I have to use ALIAS records (AWS has to track things internally), so I have to use their DNS service (Route 53). This DNS difference is a slight inconvenience for me because it's not what I'm used to, though if I want to keep my main name servers outside of AWS, I can: I would just delegate a subdomain of my domain to Route 53's name servers.

So far, this DNS difference is the only consequence of this load balancer architectural difference that I've noticed. Maybe there are more. Is there a reason GCP and AWS may have chosen the approaches they did, from an architecture perspective? Pros and cons?

Cookie is lost on refresh using nginx as a reverse proxy. I like the cookie and would like to keep it set in the browser

Posted: 18 Apr 2021 02:53 PM PDT

I'm new to Nginx and Ubuntu. I've been on Windows Server for over a decade, and this is my first try with Ubuntu and Nginx, so feel free to correct any wrong assumptions I write here :)

My setup: I have an Express.js (Node) app running as an upstream server, and a front app, built in Svelte, accessing the Express.js/Node app through an Nginx reverse proxy. Both ends use Let's Encrypt, and CORS is set up as you will see shortly.

When I run the front and back apps on localhost, I'm able to log in, set two cookies in the browser, and all endpoints perform as expected.

When I deployed the apps, I ran into a weird issue: the cookies are lost once I refresh the login page. I added a few flags to my server block, but no go.

I'm sure there is a way - I usually find one - but this issue is beyond my limited knowledge of Nginx and reverse proxy setups. I hope someone with enough knowledge can point me in the right direction or explain how to fix it.

Here is the issue: my frontend is available at travelmoodonline.com. Click on login. Username: mongo@mongo.com, password: 123. Inspect dev tools > network. Header and response are all set correctly. Check the cookies tab under network once you log in and you will see two cookies: one access token and one refresh token.

Refresh the page. Poof. Tokens are gone. I no longer know anything about the user. Stateless.

On localhost, when I refresh, the cookies are still there once set. With Nginx as a proxy, I'm not sure what happens.

So my question is: how do I fix it so the cookies are set and sent with every request? Why do the cookies disappear? Are they still in memory somewhere? Is the path wrong? Or are the cookies deleted once I leave the page, so that if I redirect the user after login to another page, the cookies don't show in dev tools?

My code: Node/Express.js server route to log in the user:

app.post('/login', (req, res) => {
    // get form data and create cookies
    res.cookie("accesstoken", accessToken, { sameSite: 'none', secure: true });
    res.cookie("refreshtoken", refreshtoken, { sameSite: 'none', secure: true })
       .json({ "loginStatus": true, "loginMessage": "vavoom: " + doc._id });
});

Frontend - svelte - fetch route with a form to collect username and password and submit it to server:

function loginform(event){
  username = event.target.username.value;
  passwordvalue = event.target.password.value;

  console.log("event username: ", username);
  console.log("event password : ", passwordvalue);

  async function asyncit (){
    let response = await fetch('https://www.foodmoodonline.com/login', {
      method: 'POST',
      origin: 'https://www.travelmoodonline.com',
      credentials: 'include',
      headers: {
        'Accept': 'application/json',
        'Content-type': 'application/json'
      },
      body: JSON.stringify({
        //username and password
      })
    }) //fetch

Now my Nginx server blocks :

# Default server configuration
#
server {
    listen 80 default_server;
    listen [::]:80 default_server;

    root /var/www/defaultdir;
    index index.html index.htm index.nginx-debian.html;

    server_name _;

    location / {
        try_files $uri $uri/ /index.html;
    }
}

# port 80 with www
server {
    listen 80;
    listen [::]:80;

    server_name www.travelmoodonline.com;
    root /var/www/travelmoodonline.com;
    index index.html;

    location / {
        try_files $uri $uri/ /index.html;
    }

    return 308 https://www.travelmoodonline.com$request_uri;
}

# port 80 without www
server {
    listen 80;
    listen [::]:80;

    server_name travelmoodonline.com;
    root /var/www/travelmoodonline.com;
    index index.html;

    location / {
        try_files $uri $uri/ /index.html;
    }

    return 308 https://www.travelmoodonline.com$request_uri;
}

# HTTPS server (with www) port 443
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload";
    server_name www.travelmoodonline.com;

    root /var/www/travelmoodonline.com;
    index index.html;

    ssl_certificate /etc/letsencrypt/live/travelmoodonline.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/travelmoodonline.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot

    location / {
        try_files $uri $uri/ /index.html;
    }
}

# HTTPS server (without www)
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload";
    server_name travelmoodonline.com;
    root /var/www/travelmoodonline.com;
    index index.html;

    location / {
        try_files $uri $uri/ /index.html;
    }

    ssl_certificate /etc/letsencrypt/live/travelmoodonline.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/travelmoodonline.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}

server {
    server_name foodmoodonline.com www.foodmoodonline.com;

    # localhost settings
    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;

    #   proxy_cookie_path / "/; secure; HttpOnly; SameSite=strict";
    #   proxy_pass_header localhost;
    #   proxy_pass_header Set-Cookie;
    #   proxy_cookie_domain localhost $host;
    #   proxy_cookie_path /;
    }

    listen [::]:443 ssl; # managed by Certbot
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/foodmoodonline.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/foodmoodonline.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}

server {
    if ($host = www.foodmoodonline.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    if ($host = foodmoodonline.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80;
    listen [::]:80;
    server_name foodmoodonline.com www.foodmoodonline.com;
    return 404; # managed by Certbot
}

I tried 301, 302, 307 and 308 after reading that some of them cover GET but not POST, but it didn't change the behavior described above. Why doesn't the cookie stay set in the browser once it shows in dev tools? Should I use rewrite instead of redirect? I'm lost.

I'm not sure whether it's an Nginx reverse proxy setting I'm not aware of, a server block setting, or the SSL redirect causing the browser to lose the cookies, but once you set a cookie, the browser is supposed to send it with each request. What is going on here?

Thank you for reading.

Hard limiting monthly egress to prevent charges

Posted: 18 Apr 2021 02:59 PM PDT

I've got a Google Cloud VM, which I'm currently running on the free tier. This gives me 1GB free egress per month before I start getting charged.

Because of this, I want to hard limit the egress of the VM to never exceed this cap in a given month.

After searching for how to do this for a while, every piece of info seems to be generalised traffic shaping to limit peak bandwidth, rather than setting monthly limits.

Eventually I stumbled across this guide, which implies what I want to do is possible with tc. However, that particular use case doesn't suit my needs, as it is a rolling limiter rather than one whose bandwidth limit resets at the start of the calendar month.

Ideally, I would like this to work in two tiers. The first is 900MB of carefree usage per calendar month, which can be used as quickly or as slowly as is needed. Once that has been used, the remaining 100MB should be allocated as is described in the guide linked above, accumulating in the bucket. Then, at the end of the calendar month, all limits are reset.

Is there a simple way to go about this?

Annoyingly, GCP doesn't have the ability to monitor cumulative egress and set an alert; the best I can do is set a budget alert once I've been charged. I'd ideally like to stop this before, rather than after, I'm charged.
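tc has no notion of calendar months, so any solution needs some bookkeeping on top: for example, a cron job that snapshots the interface's TX counter on the 1st and applies the token-bucket shaping from the linked guide once the carefree tier is spent. A sketch of just the bookkeeping part (the interface name eth0 and the counter path are assumptions about the VM; counters also reset on reboot, which a real script would have to handle):

```python
CAP_BYTES = 900 * 1024 * 1024  # the 900 MB "carefree" tier


def egress_bytes(iface: str = "eth0") -> int:
    """Cumulative TX bytes since boot, from the kernel's interface counters."""
    with open(f"/sys/class/net/{iface}/statistics/tx_bytes") as f:
        return int(f.read())


def remaining_quota(tx_now: int, tx_at_month_start: int, cap: int = CAP_BYTES) -> int:
    """Bytes left this calendar month, given a counter snapshot taken on the 1st."""
    return max(cap - (tx_now - tx_at_month_start), 0)


# once this reaches 0, the cron job would switch on the tc rate limiting
# and remove it again at the next month rollover
print(remaining_quota(50_000_000, 10_000_000))
```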

Kubernetes - Nginx Frontend & Django Backend

Posted: 18 Apr 2021 06:52 PM PDT

I'm following along with this doc: https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/ — the main difference is that my app is a Django app running on port 8000.

The frontend pod keeps crashing:

2021/03/15 00:15:52 [emerg] 1#1: host not found in upstream "posi:8000" in /etc/nginx/conf.d/posi.conf:2
nginx: [emerg] host not found in upstream "posi:8000" in /etc/nginx/conf.d/posi.conf:2

Could someone point out my mistake, please?

deploy_backend.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: posi-backend
spec:
  selector:
    matchLabels:
      app: posi
      tier: backend
      track: stable
  replicas: 2
  template:
    metadata:
      labels:
        app: posi
        tier: backend
        track: stable
    spec:
      containers:
      - name: posi-backend
        image: xxxxxxxxxx.yyy.ecr.us-east-1.amazonaws.com/posi/posi:uwsgi
        command: ["uwsgi"]
        args: ["--ini", "config/uwsgi.ini"]
        ports:
        - name: http
          containerPort: 8000
      imagePullSecrets:
        - name: regcred

service_backend.yaml

apiVersion: v1
kind: Service
metadata:
  name: posi-backend
spec:
  selector:
    app: posi
    tier: backend
  ports:
  - protocol: TCP
    port: 8000
    targetPort: 8000

deploy_frontend.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: posi-frontend
spec:
  selector:
    matchLabels:
      app: posi
      tier: frontend
      track: stable
  replicas: 1
  template:
    metadata:
      labels:
        app: posi
        tier: frontend
        track: stable
    spec:
      containers:
        - name: posi-frontend
          image: alx/urn_front:1.0
          lifecycle:
            preStop:
              exec:
                command: ["/usr/sbin/nginx", "-s", "quit"]

service_frontend.yaml

apiVersion: v1
kind: Service
metadata:
  name: posi-frontend
spec:
  selector:
    app: posi
    tier: frontend
  ports:
  - protocol: "TCP"
    port: 80
    targetPort: 80
  type: NodePort

nginx.conf

upstream posi_backend {
    server posi:8000;
}
server {
    listen 80;
    server_name qa-posi;

    location /static/ {
        alias /code/static/;
    }
    location / {
        proxy_pass http://posi_backend;
        include     /etc/nginx/uwsgi_params;
    }
}

Cluster information:

  • Kubernetes version: 1.20
  • Cloud being used: bare-metal
  • Installation method: Kubeadm
  • Host OS: Centos 7.9
  • CNI and version: Weave 0.3.0
  • CRI and version: Docker 19.03.11
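One thing worth noting when comparing the manifests: nginx resolves the upstream host via cluster DNS, and the backend Service above is named posi-backend, while the upstream says posi. A sketch of an upstream matching the Service name (assuming standard in-cluster DNS, same namespace):

```nginx
upstream posi_backend {
    # the hostname must be the Service's metadata.name, not the app label
    server posi-backend:8000;
}
```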

How to Utilize NIC Teaming for Hyper-V VMs with VLAN in Windows Server 2016

Posted: 18 Apr 2021 01:49 PM PDT

I have two Windows Server 2016 hosts with Hyper-V installed. Each server has two Ethernet adapters, and each Hyper-V host runs several VMs. My goal is for VMs to be able to communicate with each other if they are in the same VLAN.

To make the network connection redundant, I created a network team on the physical machine, using "Switch Independent" mode with the "Address Hash" load-balancing option. In Virtual Switch Manager, I created an external switch by selecting the teamed adapter (Microsoft Network Adapter Multiplexor Driver).

Under each VM, I created a virtual adapter with a VLAN tag.

However, the VMs in the same VLAN cannot communicate with each other.

On the switch side, I have already configured trunk mode for all the ports connected with the physical machines.

If I remove the teaming, the VMs can communicate with VLAN tags. How can I address this issue?
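For a switch-independent team that is bound to a Hyper-V virtual switch, the "Address Hash" load-balancing mode is a known source of problems with tagged VM traffic; Microsoft's general guidance for teams used by a vSwitch is the "Hyper-V Port" mode. A PowerShell sketch worth trying (the team name is illustrative):

```powershell
Set-NetLbfoTeam -Name "Team1" -LoadBalancingAlgorithm HyperVPort
Get-NetLbfoTeam | Format-List Name, TeamingMode, LoadBalancingAlgorithm
```

With Hyper-V Port mode, each VM's traffic sticks to one team member, which avoids the MAC-address flapping that Address Hash can cause on the physical switch.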

Determine which TLS version is used by default (cURL)

Posted: 18 Apr 2021 06:49 PM PDT

I have 2 servers that both run curl 7.29.0 and CentOS 7. However, when I run the following command I get 2 different results:

curl https://tlstest.paypal.com  
  1. PayPal_Connection_OK
  2. ERROR! Connection is using TLS version lesser than 1.2. Please use TLS1.2

Why does one server default to a TLSv1.2 connection and the other does not?

(I know I can force it with --tlsv1.2 if I wanted to)
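On stock CentOS 7, curl 7.29.0 is built against NSS, and the default TLS version comes from the crypto backend rather than from curl itself — so two servers with identical curl versions can still behave differently if their NSS packages differ. A quick way to check, sketched below:

```shell
# The first line of `curl -V` names the crypto backend and its version,
# e.g. "NSS/3.36" or "OpenSSL/1.0.2k" -- compare this across both servers:
curl -V | head -n1
# To see what a given handshake actually negotiated, look for the
# "SSL connection using ..." line in verbose output, e.g.:
#   curl -sv https://tlstest.paypal.com -o /dev/null 2>&1 | grep "SSL connection"
```

If the backend versions differ, updating the older server's NSS (or OpenSSL) packages is the usual fix rather than forcing `--tlsv1.2` everywhere.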

Automatic profile configuration in Outlook 2016 / Outlook 365

Posted: 18 Apr 2021 02:49 PM PDT

I'm in the process of reconfiguring Outlook 2016 clients with an Exchange 365 backend. The majority of my users need access to one or more shared mailboxes to receive and send e-mail. Using the default option of giving these users full mailbox access to the shared mailboxes, that is easily and automatically accomplished. With some tweaking (Set-MailboxSentItemsConfiguration), I can even have a copy of sent items stored in the sent items folder of the shared mailbox, so everyone is up to date on what is being sent. Nice.

But I also need separate signatures for each mailbox, and I need to be able to configure different local cache period settings. For the primary mailbox I need to keep a local copy of about 6 months (for fast searching), but for the shared mailboxes one month would do. This keeps the local .ost files a lot smaller compared to the scenario where all shared mailboxes have the same cache period.

The only way I know to accomplish this is by using extra Outlook accounts instead of extra Outlook mailboxes. Now I need to find a way to add the extra accounts automatically to the Outlook profile. In the pre-Exchange-365 era, I would have used Microsoft's Office Customization Tool to create a basic .prf file, used VBScript to find the shared mailboxes the current user has access to and add them to the .prf profile, then had the user start Outlook with the /importprf switch, and voilà.

But now I'm already stuck at creating the .prf file with the OCT. What do I use for the Exchange Server name? That weird GUID you find after manually configuring Outlook with Exchange 365? Maybe the OCT is not the best option. I also found a PowerShell tool called PowerMAPI (http://powermapi.com), but it's hard to find out whether it works with Exchange 365. The same goes for Outlook Redemption (http://www.dimastr.com/redemption/home.htm). Does anyone have experience with these tools? Or am I making this far more complicated than needed? I'm open to all suggestions...

Why are "Request header read timeout" messages in error log when page loads are short?

Posted: 18 Apr 2021 01:49 PM PDT

I am running a Rails application with Apache 2.4.10 and mod_passenger. The site uses https exclusively. I am seeing these messages in my error log:

[Wed May 31 19:05:37.528070 2017] [reqtimeout:info] [pid 11111] [client 10.100.23.2:57286] AH01382: Request header read timeout
[Wed May 31 19:05:37.530672 2017] [reqtimeout:info] [pid 11229] [client 10.100.23.2:57288] AH01382: Request header read timeout
[Wed May 31 19:05:37.890259 2017] [reqtimeout:info] [pid 10860] [client 10.100.23.2:57271] AH01382: Request header read timeout
[Wed May 31 19:05:37.890383 2017] [reqtimeout:info] [pid 10859] [client 10.100.23.2:57272] AH01382: Request header read timeout
[Wed May 31 19:05:37.890459 2017] [reqtimeout:info] [pid 10862] [client 10.100.23.2:57285] AH01382: Request header read timeout
[Wed May 31 19:05:37.947942 2017] [reqtimeout:info] [pid 10863] [client 10.100.23.2:57287] AH01382: Request header read timeout

These messages appear in the error log about two to three seconds after the end of my page load. However, the complete page load takes only a few seconds. I am using mod_reqtimeout with this setting:

RequestReadTimeout header=20-40,minrate=500  

Since the page load only takes a few seconds I do not understand why the Request header read timeout messages are being logged to the error log.

Why are these messages appearing and what can I do to remedy this?
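One likely cause, assuming nothing in the access log correlates with these entries: modern browsers open several speculative keep-alive connections in parallel and close the unused ones without ever sending a request. Those idle connections hit the header read timeout on their own schedule, independent of how fast the real page loaded, and AH01382 is logged at info level. If that is what's happening here, the messages are harmless and can be quieted with Apache 2.4's per-module log level:

```apache
# Keep LogLevel info globally, but drop mod_reqtimeout to warn
# so the AH01382 info-level notices are no longer logged:
LogLevel info reqtimeout:warn
```

Checking whether the client IPs and ports in the messages match connections that never appear in the access log is a good way to confirm this theory before silencing anything.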

Ubuntu 15.10 Server; W: Possible missing firmware /lib/firmware/ast_dp501_fw.bin for module ast

Posted: 18 Apr 2021 01:24 PM PDT

I'm running Ubuntu 15.10 server on an ASRock E3C226D2I board. When I get a kernel update or run update-initramfs -u, I get a warning about missing firmware:

root@fileserver:~# update-initramfs -u
update-initramfs: Generating /boot/initrd.img-4.2.0-27-generic
W: Possible missing firmware /lib/firmware/ast_dp501_fw.bin for module ast

I can't find much information on this particular firmware, other than that it is probably for my video card. Since I'm running a server, I don't really care about graphics (no monitor attached).

Everything works fine, so I'm ignoring it for now, but is there a way to fix this?
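The blob is optional (it is only used by the ast DRM driver for boards with a DP501 transmitter), so ignoring the warning is safe. If you'd rather silence it, the file ships in the upstream linux-firmware tree; the URL below points at the kernel.org repository and should be verified before trusting it:

```shell
# Drop the optional ast firmware into place and regenerate the initramfs
# (requires root; URL is the upstream linux-firmware tree):
wget -O /lib/firmware/ast_dp501_fw.bin \
  "https://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git/plain/ast_dp501_fw.bin"
update-initramfs -u
```

On releases where the linux-firmware package already includes the file, simply updating that package achieves the same thing.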

Unable to RDP as a Domain User

Posted: 18 Apr 2021 02:49 PM PDT

I am facing a problem remoting into a machine using a Domain account.

Problem Facts :

  1. The host VMs are hosted by Company A (read: Domain A). The VMs have a local administrator as well as Domain A user accounts that are in the "Administrators" group on the VMs.
  2. I belong to a Company B (Domain B).
  3. I use a VPN provided by Company A to have access to their network.
  4. I was previously able to use mstsc from Computer on Domain B to remote into any of VM's on Domain A.
  5. Recently Company A migrated their Domain A into Domain Z.
  6. Now I am not able to remote from a computer on Domain B into a VM on Domain Z using my Domain Z user account; however, I am able to log in using the local user account. The error for the domain account is a generic "credentials not valid".
  7. My Domain Z accounts work when I remote-access another VM (say VM1) using my domain account after logging into VM2 as local admin. (VM 1 & 2 are on Domain Z.)
  8. The problems in steps 6 & 7 only SEEM to occur in domain-based environments (Domain B, where my local machine is located, and Domain C, where another company's user is facing the same issue as me).
  9. When trying from a local machine with windows freshly installed (no domain, no AV, default OS options) over Company A provided VPN, everything works fine i.e can remote into VM using Domain Accounts.
  10. Windows 7 Enterprise as host; Windows 7, 2008 R2, and 8.1 as guest VMs.
  11. On the guest machine, I tried deactivating the firewall, stopping the Forefront security app, and removing the machine from the domain and connecting to the internet directly, but it still would not connect. (Maybe some group policy is causing the issue and removing the machine from the domain does not deactivate the policy. The surprising factor is that people from Company C were also facing the same issue.)

How can I troubleshoot this issue?

UWSGI Bad Gateway - Connection refused while connecting to upstream

Posted: 18 Apr 2021 04:58 PM PDT

Trying to get a basic Django app running on nginx using UWSGI. I keep getting a 502 error with the error in the subject line. I am doing all of this as root, which I know is bad practice, but I am just practicing. My config file is as follows (it's included in the nginx.conf file):

server {
    listen 80;
    server_name 104.131.133.149;

    location = /favicon.ico { access_log off; log_not_found off; }

    location /static/ {
        root /home/root/headers;
    }

    location / {
        include         uwsgi_params;
        uwsgi_pass      127.0.0.1:8080;
    }
}

And my uwsgi file is:

[uwsgi]
project = headers
base = /root

chdir = %(base)/%(project)
home = %(base)/Env/%(project)
module = %(project).wsgi:application

master = true
processes = 5

socket = 127.0.0.1:8080
chmod-socket = 666
vacuum = true

As far as I can tell I am passing all requests on port 80 (from nginx.conf) upstream to localhost, which is running on my VH, where uwsgi is listening on port 8080. I've tried this with a variety of permissions, including 777. If anyone can point out what I'm doing wrong please let me know.
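One note on the permissions angle: since the socket here is TCP (127.0.0.1:8080), chmod-socket and file permissions don't apply at all — those only matter for Unix sockets. "Connection refused" from nginx means nothing is accepting connections on that port, so the first thing to verify is that uwsgi actually started and bound it:

```shell
# Is anything listening on the port nginx proxies to?
ss -tlnp | grep 8080        # or: netstat -tlnp | grep 8080
# If nothing is, run uwsgi in the foreground with the same ini file and
# watch its startup output -- Python import errors are a common silent killer:
uwsgi --ini headers.ini
```

If the port is bound and the 502 persists, the problem is between nginx and the socket; if it isn't, the uwsgi startup log will usually say why.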

iLO 3 Firmware Update (HP Proliant DL380 G7)

Posted: 18 Apr 2021 03:46 PM PDT

The iLO web interface allows me to upload a .bin file (Obtain the firmware image (.bin) file from the Online ROM Flash Component for HP Integrated Lights-Out.)


The iLO web interface redirects me to a page in the HP support website (http://www.hp.com/go/iLO) where I am supposed to find this .bin firmware, but no luck for me. The support website is a mess and very slow, badly categorized and generally unusable.

Where can I find this .bin file? The only related link I am able to find asks me about my server operating system (what does this have to do with the iLO?!) and lets me download an .iso with no .bin file

And also a related question: what is the latest iLO 3 version? (for Proliant DL380 G7, not sure if the iLO is tied to the server model)
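On the OS question: HP packages the same firmware image inside OS-specific "Online ROM Flash Components", which is why the download page asks about the server's operating system. The Linux component (.scexe) is a self-extracting archive, and the .bin can be pulled out of it without flashing anything — a sketch, with an illustrative component filename:

```shell
# Unpack the Linux flash component without running it (no flashing occurs):
sh CP012345.scexe --unpack=ilo3_fw
ls ilo3_fw/*.bin    # this is the image the iLO web interface accepts
```

The Windows component (.exe) can similarly be opened with an archive tool to extract the .bin inside.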

How do I change Jboss' logging directory

Posted: 18 Apr 2021 04:58 PM PDT

I'm running jboss-as 7.2. I'm trying to configure all log files to go to /var/log/jboss-as but only the console log is going there. I'm using the init.d script provided with the package and it calls standalone.sh. I'm trying to avoid having to modify startup scripts.

I've tried adding JAVA_OPTS="-Djboss.server.log.dir=/var/log/jboss-as" to my /etc/jboss-as/jboss-as.conf file but the init.d script doesn't pass JAVA_OPTS to standalone.sh when it calls it.

The documentation also says I should be able to specify the path via XML with the following line in standalone.xml:

<path name="jboss.server.log.dir" path="/var/log/jboss-as"/>  

However, it doesn't say where in the file to put that. Every place I try to put it causes JBoss to crash on startup saying that it can't parse the standalone.xml file correctly.
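In the AS 7 standalone.xml schema, <path> elements live inside a <paths> element that is a direct child of <server>, placed after <extensions> (and <system-properties>, if present); putting the element anywhere else fails the parse, which matches the crashes described. Whether AS 7.2 then lets you override the built-in jboss.server.log.dir name this way is worth verifying — the fragment below is a sketch showing placement only (namespace version may differ on your install):

```xml
<server xmlns="urn:jboss:domain:1.4">
    <extensions>
        <!-- ... -->
    </extensions>
    <paths>
        <path name="jboss.server.log.dir" path="/var/log/jboss-as"/>
    </paths>
    <!-- management, profile, interfaces, ... -->
</server>
```

If the server rejects redefining the built-in name, the -Djboss.server.log.dir system property remains the documented route, which means finding the variable the init script does pass through (often JBOSS_OPTS rather than JAVA_OPTS).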

Configure Citadel to be Used For sending email using smtp

Posted: 18 Apr 2021 03:58 PM PDT

Sorry for this foolish question, but I have little knowledge about servers, so bear with me!

I have configured Citadel as directed in the Linode documentation, can log in using the Citadel front-end, and can send emails that way. How can I configure SMTP and use it as a mail service for sending emails from Laravel, which is a PHP framework? Any help will be appreciated.

I have configured it as

Enter 0.0.0.0 for listen address
Select Internal for authentication method
Specify your admin <username>
Enter an admin <password>
Select Internal for web server integration
Enter 80 for the Webcit HTTP port
Enter 443 for the Webcit HTTPS port (or enter -1 to disable it)
Select your desired language

After this I entered the mail name in /etc/mailname as

mail.domain.com  

and I can access it and send mail using https://mail.domain.com

My Laravel mail.php file has 'driver' => 'smtp',

/*
|--------------------------------------------------------------------------
| SMTP Host Address
|--------------------------------------------------------------------------
|
| Here you may provide the host address of the SMTP server used by your
| applications. A default option is provided that is compatible with
| the Postmark mail service, which will provide reliable delivery.
|
*/

'host' => 'mail.hututoo.com',

/*
|--------------------------------------------------------------------------
| SMTP Host Port
|--------------------------------------------------------------------------
|
| This is the SMTP port used by your application to delivery e-mails to
| users of your application. Like the host we have set this value to
| stay compatible with the Postmark e-mail application by default.
|
*/

'port' => 25,

/*
|--------------------------------------------------------------------------
| Global "From" Address
|--------------------------------------------------------------------------
|
| You may wish for all e-mails sent by your application to be sent from
| the same address. Here, you may specify a name and address that is
| used globally for all e-mails that are sent by your application.
|
*/

'from' => array('address' => 'no-reply@hututoo.com', 'name' => null),

/*
|--------------------------------------------------------------------------
| E-Mail Encryption Protocol
|--------------------------------------------------------------------------
|
| Here you may specify the encryption protocol that should be used when
| the application send e-mail messages. A sensible default using the
| transport layer security protocol should provide great security.
|
*/

'encryption' => 'tls',

/*
|--------------------------------------------------------------------------
| SMTP Server Username
|--------------------------------------------------------------------------
|
| If your SMTP server requires a username for authentication, you should
| set it here. This will get used to authenticate with your server on
| connection. You may also set the "password" value below this one.
|
*/

'username' => 'passname',

/*
|--------------------------------------------------------------------------
| SMTP Server Password
|--------------------------------------------------------------------------
|
| Here you may set the password required by your SMTP server to send out
| messages from your application. This will be given to the server on
| connection so that the application will be able to send messages.
|
*/

'password' => 'paswwordtest',

/*
|--------------------------------------------------------------------------
| Sendmail System Path
|--------------------------------------------------------------------------
|
| When using the "sendmail" driver to send e-mails, we will need to know
| the path to where Sendmail lives on this server. A default path has
| been provided here, which will work well on most of your systems.
|
*/

'sendmail' => '/usr/sbin/citmail -t',
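Before pointing Laravel at the server, it's worth confirming that Citadel's SMTP listener actually answers on the host and port configured above — a 220 banner means the SMTP side is alive and the remaining work is only credentials and encryption in mail.php (the banner text will vary):

```shell
# Connect to the SMTP port from the app server; expect a "220 ..." greeting:
telnet mail.hututoo.com 25
```

Note that 'encryption' => 'tls' on port 25 requires the server to support STARTTLS; if the handshake fails, try 'encryption' => null first to isolate whether the problem is TLS or authentication.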

Change local password as root after configuring for MS-AD Kerberos+LDAP

Posted: 18 Apr 2021 09:58 PM PDT

I have followed this excellent post to configure Kerberos + LDAP:
http://koo.fi/blog/2013/01/06/ubuntu-12-04-active-directory-authentication/

However, there are some local users used for services.
When I try to change the password for one of those as root, it asks for the current Kerberos password and then exits:

passwd service1
Current Kerberos password:  (I hit enter)
Current Kerberos password:  (I hit enter)
passwd: Authentication token manipulation error
passwd: password unchanged

If I switch to the local user and do passwd, it asks once for Kerberos then falls back to local:
$ passwd
Current Kerberos password:
Changing password for service1.
(current) UNIX password:

My configuration is similar to the site I posted above, and everything works fine, I just can't change the local users' passwords as root.

Thanks in advance for any help.

3.8.0-29-generic #42~precise1-Ubuntu

Update 1 2013-01-31:

# cat /etc/pam.d/common-auth
auth    [success=3 default=ignore]      pam_krb5.so minimum_uid=1000
auth    [success=2 default=ignore]      pam_unix.so nullok_secure try_first_pass
auth    [success=1 default=ignore]      pam_ldap.so use_first_pass
auth    requisite                       pam_deny.so
auth    required                        pam_permit.so
auth    optional                        pam_cap.so

# cat /etc/pam.d/common-password
password        [success=3 default=ignore]      pam_krb5.so minimum_uid=1000
password        [success=2 default=ignore]      pam_unix.so obscure use_authtok try_first_pass sha512
password        [success=1 user_unknown=ignore default=die]     pam_ldap.so use_authtok try_first_pass
password        requisite                       pam_deny.so
password        required                        pam_permit.so
password        optional        pam_gnome_keyring.so
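The transcript is consistent with the common-password stack shown: service1's UID is evidently ≥ 1000, so pam_krb5 runs first and demands a Kerberos password even when root makes the change. Two common ways out are keeping local service accounts below the minimum_uid threshold, or raising that threshold so PAM skips Kerberos for them. A sketch of the latter (5000 is an arbitrary example value; pick one above your highest local service-account UID but below your AD UID range):

```
# /etc/pam.d/common-password -- raise minimum_uid so local service
# accounts never hit pam_krb5:
password        [success=3 default=ignore]      pam_krb5.so minimum_uid=5000
```

The same change would normally be mirrored in common-auth so that login behaviour stays consistent with password changes.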

Create self-signed terminal services certificate and install it

Posted: 18 Apr 2021 04:51 PM PDT

The server's RDP certificate expires every 6 months and is automatically recreated, meaning I need to re-install the new certificate on the client machines to allow users to save passwords.

Is there a straightforward way to create a self-signed certificate with a longer expiry?

I have 5 servers to configure.

Also, how do I install the certificate such that terminal services uses it?

Note: Servers are not on a domain and I'm pretty sure we're not using a gateway server.
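On recent Windows versions this can be scripted end to end: create a long-lived self-signed certificate in the machine store, then point the RDP listener at its thumbprint. Treat the sketch below as exactly that — cmdlet availability and parameters vary by OS version (-NotAfter, for instance, is missing from older New-SelfSignedCertificate builds), and the DNS name is illustrative:

```powershell
# Create a 5-year self-signed cert in the local machine store:
$cert = New-SelfSignedCertificate -DnsName "server1.example.com" `
    -CertStoreLocation Cert:\LocalMachine\My -NotAfter (Get-Date).AddYears(5)

# Bind it to the RDP listener by thumbprint:
wmic /namespace:\\root\cimv2\TerminalServices PATH Win32_TSGeneralSetting `
    Set SSLCertificateSHA1Hash="$($cert.Thumbprint)"
```

The clients then need the certificate's public part imported into their Trusted Root store once, and it stays valid for the chosen lifetime instead of 6 months.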

Errors getting apache working with fcgi and suexec

Posted: 18 Apr 2021 03:58 PM PDT

I have a Debian 6 server and I was previously using Apache with mod_php but decided to switch to using fcgi instead since Wordpress was somehow causing Apache to crash. I have the following in my site's Apache config file:

Options +ExecCGI
AddHandler fcgid-script .php
FCGIWrapper /usr/lib/cgi-bin/php5 .php

SuexecUserGroup "#1001" "#1003"

Everything works fine if I don't include the SuexecUserGroup, but it obviously then runs the script as www-data instead of the user and group above. When I include that line, I get a 500 error and the following shows up in my suexec.log file:

[2013-05-22 16:00:12]: command not in docroot (/usr/lib/cgi-bin/php5)  

Everything was installed using the packages, so I don't even know where the docroot is.

Here is my suexec info:

# /usr/lib/apache2/suexec -V
 -D SUEXEC_CONFIG_DIR=/etc/apache2/suexec/
 -D AP_GID_MIN=100
 -D AP_LOG_EXEC="/var/log/apache2/suexec.log"
 -D AP_SAFE_PATH="/usr/local/bin:/usr/bin:/bin"
 -D AP_UID_MIN=100

And the permissions on my php5 file if that has anything to do with it:

# ls -l /usr/lib/cgi-bin/php5
-rwxr-xr-x 1 root root 7769160 Mar  4 08:25 /usr/lib/cgi-bin/php5
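The -V output answers the docroot question: SUEXEC_CONFIG_DIR means Debian's apache2-suexec-custom build is installed, and that build reads a per-user config file from /etc/apache2/suexec/ whose first line defines the document root it will accept. /usr/lib/cgi-bin sits outside any likely docroot, hence "command not in docroot". One approach is to copy the php5 wrapper somewhere under the docroot and reference that copy; the file below is a sketch with illustrative paths:

```
# /etc/apache2/suexec/www-data
# line 1: document root suexec enforces; line 2: userdir suffix
/var/www
public_html/cgi-bin
```

With that in place, FCGIWrapper would point at a wrapper under /var/www (owned by the target user and not group/world-writable, per suexec's other checks).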

Server Room Temperature Monitoring - Where to stick my probes?

Posted: 18 Apr 2021 06:09 PM PDT

I have a small server room, approx 7' x 12', with an A/C unit dedicated to this room that is positioned on one of the short (7') sides and blows air across the room towards the other short (7') side.

The server room is set to temp of 69F but usually will only ever get down to around 70-71F (temp measured by the thermostat control panel on the wall).

I have two 1-wire temp. monitor gauges plugged into a linux box that graphs out measured temperatures. Right now the temp. monitor gauges hang on one of the long (12') sides and are positioned closely together.

I don't think this placement gives an accurate representation of the room's real temperatures, and I would like to fix it. Where is it best to position the temperature sensors in a room like this? I don't think hanging them from the drop ceiling would work, since the A/C unit would then blow cold air directly on them (skewing the measurements terribly).

how to install ssl on tomcat 7?

Posted: 18 Apr 2021 06:49 PM PDT

I know this question might sound too easy, and I should have read all the docs available on the internet. The truth is that I did, and I had no luck. It's kind of confusing for me; I have installed this many times for Apache, but never for Tomcat.

I want to install a certificate from GoDaddy, so I followed these instructions:

http://support.godaddy.com/help/article/5239/generating-a-csr-and-installing-an-ssl-certificate-in-tomcat-4x5x6x

I created my keyfile like this

keytool -keysize 2048 -genkey -alias tomcat -keyalg RSA -keystore tomcat.keystore
keytool -certreq -keyalg RSA -alias tomcat -file csr.csr -keystore tomcat.keystore

I changed tomcat to mydomain.com... is that wrong?

I created the keystore, then the CSR; after that the problem comes. I added this to server.xml in the config folder:

<Connector port="8443" maxThreads="200"
           scheme="https" secure="true" SSLEnabled="true"
           keystoreFile="path to your keystore file" keystorePass="changeit"
           clientAuth="false" sslProtocol="TLS"/>

Later I imported the certs

keytool -import -alias root -keystore tomcat.keystore -trustcacerts -file valicert_class2_root.crt  

and I did, but I don't have a gd_intermediate.crt, and the last step is

keytool -import -alias tomcat -keystore tomcat.keystore -trustcacerts -file <name of your certificate>  

Reading other blogs, I saw that they import the .crt here. Is tomcat the alias I have to keep, or is it only an example?

In the docs of tomcat I found this (http://tomcat.apache.org/tomcat-7.0-doc/ssl-howto.html)

Download a Chain Certificate from the Certificate Authority you obtained the Certificate from:

    keytool -import -alias root -keystore <your_keystore_filename> \
        -trustcacerts -file <filename_of_the_chain_certificate>

And finally import your new Certificate:

    keytool -import -alias tomcat -keystore <your_keystore_filename> \
        -file <your_certificate_filename>

but I have no idea what a "chain certificate" is... can somebody help me? I am really confused and lost. I am using Tomcat 7.

Thanks.
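To the "chain certificate" question above: it is simply the CA's intermediate certificate — the link between GoDaddy's root and the certificate issued to you. And tomcat is not a user; aliases are just labels for keystore entries. The only constraint is that the final import must use the same alias as the key pair generated earlier, so the signed certificate attaches to that private key. A sketch with GoDaddy-typical bundle names (verify the actual filenames in your download):

```shell
# Trust anchors first (aliases for these are arbitrary labels):
keytool -import -alias root     -keystore tomcat.keystore -trustcacerts -file valicert_class2_root.crt
keytool -import -alias intermed -keystore tomcat.keystore -trustcacerts -file gd_intermediate.crt
# Your own certificate last -- SAME alias as the genkey step ("tomcat" here):
keytool -import -alias tomcat   -keystore tomcat.keystore -file mydomain.com.crt
```

keytool will answer "Certificate reply was installed in keystore" only on that last import if the alias matches the key pair; any other alias creates a trusted-cert entry that the Connector cannot use.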

How to copy many Scheduled Tasks between Windows Server 2008 machines?

Posted: 18 Apr 2021 04:13 PM PDT

I have several standalone Win2008 (R1+R2) servers (no domain) and each of them has dozens of scheduled tasks. Each time we set up a new server, all these tasks have to be created on it.

The tasks are not living in the 'root' of the 'Task Scheduler Library' they reside in sub folders, up to two levels deep.

I know I can use schtasks.exe to export tasks to an xml file and then use:

schtasks.exe /CREATE /XML ...

to import them on the new server. The problem is that schtasks.exe creates them all in the root, not in the subfolders where they belong. There is also no way in the GUI to move tasks around.

Is there a tool that allows me to manage all my tasks centrally, and allows me to create them in folders on several machines? It would also make it easier to set the 'executing user and password'.
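One detail worth noting: schtasks accepts a folder path as part of the task name passed to /TN, and creates any missing folders on import — so exporting and re-creating with the full \folder\name preserves the hierarchy without extra tooling. A sketch (the task path and run-as account are illustrative):

```
schtasks /Query  /TN "\Maintenance\NightlyBackup" /XML > NightlyBackup.xml
schtasks /Create /TN "\Maintenance\NightlyBackup" /XML NightlyBackup.xml /RU "BACKUPUSER" /RP *
```

Wrapping that pair in a loop over the exported XML files gives a repeatable provisioning script, with /RU and /RP setting the executing user and prompting for its password.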
