Sunday, August 22, 2021

Recent Questions - Server Fault

How do I identify registerable/registered domains (ones with whois) and domains without whois (subdomains)

Posted: 22 Aug 2021 10:02 PM PDT

I have an issue. I am trying to check whether a given string is a valid "registered" or "registerable" domain name.

Ideally I want to see whether the given string can have a valid "registrar" or not. I am already checking the string format using a regex, which returns true for:

  1. something.com
  2. something.com.au
  3. domains.google
  4. something.something.com

I want to be able to differentiate a "registerable" or "registered" domain from a non-registerable or non-registered one, and I want to do it without checking whois.

My application needs to accept both domain names and subdomain names, so my regex is just fine for that purpose. But I need to flag in my db whether the entered value is a subdomain or a domain for which I can find whois.

The whole point is to avoid hitting whois servers to retrieve information if the provided string is not a "registered" or "registerable" domain.

I did a lot of research and played around with https://publicsuffix.org/list/public_suffix_list.dat, but that isn't the solution on its own. This is because, for example, "wixsite.com" is a valid public suffix, but "something.wixsite.com" cannot have a whois.
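
A minimal sketch of the idea (not full PSL matching: wildcard "*." and exception "!" rules are ignored). The relevant detail for the wixsite.com case is that public_suffix_list.dat has an ICANN section and a PRIVATE section; only suffixes in the ICANN section correspond to registries where a whois record can exist, while "wixsite.com" lives in the private section.

import urllib.request

PSL_URL = "https://publicsuffix.org/list/public_suffix_list.dat"

def load_icann_suffixes():
    """Collect only the ICANN-section suffixes from the public suffix list."""
    icann, in_icann = set(), False
    with urllib.request.urlopen(PSL_URL) as resp:
        for raw in resp.read().decode("utf-8").splitlines():
            line = raw.strip()
            if line == "// ===BEGIN ICANN DOMAINS===":
                in_icann = True
            elif line == "// ===END ICANN DOMAINS===":
                in_icann = False
            elif in_icann and line and not line.startswith("//"):
                icann.add(line)
    return icann

def registrable_part(name, icann):
    """Return the registerable domain (longest ICANN suffix + one label), or None."""
    labels = name.lower().rstrip(".").split(".")
    for i in range(len(labels)):
        if ".".join(labels[i:]) in icann:
            return ".".join(labels[i - 1:]) if i > 0 else None
    return None

icann = load_icann_suffixes()
for s in ["something.com", "something.com.au", "domains.google",
          "something.something.com", "something.wixsite.com"]:
    print(s, "->", registrable_part(s, icann))

With this, a string whose registerable part equals the string itself ("something.com") can be flagged as registered/registerable, while anything longer ("something.something.com", "something.wixsite.com") is a subdomain of such a domain.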

Which AV and security software for Windows

Posted: 22 Aug 2021 09:59 PM PDT

Which AV and security software is recommended for Windows Server? Lots of detections from ESET, but it uses a lot of RAM!

Apache Traffic Server with LetsEncrypt

Posted: 22 Aug 2021 07:07 PM PDT

I have been trying to set up a Traffic Server (v8.0.5) as a reverse proxy for a few hours now. It always works when I use regular HTTP. However, when I try to implement my certificates made with LetsEncrypt, it causes issues. The problem is that when I send a GET request with curl I receive this error: (error:1408F10B:SSL routines:ssl3_get_record:wrong version number).

ssl_multicert.config:

dest_ip=* ssl_cert_name=/etc/trafficserver/fullchain.pem ssl_key_name=/etc/trafficserver/privkey.pem

records.config:

Pastebin

remap.config:

map https://myDomain:8080 http://localhost  
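
For what it's worth, curl's "wrong version number" error generally means TLS was being spoken to a port that answered with plain HTTP. A hedged sketch of the records.config port line that designates a separate TLS listener, assuming the standard server_ports syntax (8443 is an arbitrary choice):

CONFIG proxy.config.http.server_ports STRING 8080 8443:ssl

With a port marked :ssl, the certificate from ssl_multicert.config is used for that listener, and the curl test would target that port over https while the remap rule still forwards to http://localhost.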

Updating my website on GCP

Posted: 22 Aug 2021 07:02 PM PDT

So I recently built a simple React app that I have deployed on Google Cloud, and everything is now working properly. The question I have now: since it is a simple app, I need to update it and will continue to update it in the future. So I'm wondering if I can use the same process I did when initially deploying it.

Here are the initial steps I used:

  1. Run: npm run build
  2. In GCP, click on the 3 bars (top left) and go to Storage
  3. Click on Create bucket, give it a name and continue; for region click multi and continue; for most of the bucket creation just click continue, and then create
  4. Click to upload to the bucket, search for the build folder, and upload it
  5. Build the app.yaml file and upload the yaml file to the cloud
  6. Open Cloud Shell and create a new directory (mkdir to make it)
  7. gsutil cp -r gs://xxxxxxxxxxx ./xxxxxxxxxx (the first one is the bucket name, the second one is the directory we just made)
  8. Switch to the folder created earlier
  9. Run: gcloud app deploy

I know that some of the steps are unnecessary, such as the yaml, but basically I should do the build, upload to the bucket, sync/cp, and then deploy it.
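
A condensed sketch of that same flow as commands (the bucket and directory names are placeholders standing in for the masked ones above):

npm run build
gsutil cp -r ./build gs://YOUR_BUCKET/        # push the fresh build to the bucket
# then, in Cloud Shell:
mkdir deploy_dir
gsutil cp -r gs://YOUR_BUCKET ./deploy_dir    # pull the bucket contents (build + app.yaml)
cd deploy_dir && gcloud app deploy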

express js app with nginx reverse proxy 'redirected you too many times.'

Posted: 22 Aug 2021 04:47 PM PDT

I'm using pm2 to run my MERN app as a process. When I type curl http://localhost:3000 in the console, the output is indeed from my app, but the nginx reverse proxy is not working. The app is running on a VPS and connected to a domain name, but I'm getting 'redirected you too many times.' in the browser.

server.js

const PORT = 3000
const app = express()

const router = express.Router()

const { persistor, store } = createPersistor()

const serverRenderer = (req, res, next) => {
  app.get('/*', function (req, res) {
    res.sendFile(path.join(__dirname, '../build/index.html'), function (err) {
      if (err) {
        res.status(500).send(err)
      }
    })
  })

  const context = {}

  fs.readFile(path.resolve('./build/index.html'), 'utf8', (err, data) => {
    if (err) {
      console.error(err)
      return res.status(500).send('An error occurred')
    }
    return res.send(
      data.replace(
        '<div id="root"></div>',
        `<div id="root">
        ${ReactDOMServer.renderToString(
          <Provider store={store}>
            <StaticRouter location={req.url} context={context}>
              <PersistGate loading={null} persistor={persistor}>
                <App />
              </PersistGate>
            </StaticRouter>
          </Provider>
        )}
        </div>`
      )
    )
  })
}

router.use('^/$', serverRenderer)

router.use(
  express.static(path.resolve(__dirname, '..', 'build'))
)

app.use(router)

app.listen(PORT, () => {
  console.log(`SSR running on port ${PORT}`)
})

nginx/sites-available/default:

server {
  listen 80 default_server;
  listen [::]:80 default_server;

  server_name 198.51.100.255;
  return 302 $scheme://mysite.com$request_uri;

  location / {
    # First attempt to serve request as file, then
    # as directory, then fall back to displaying a 404.
    try_files $uri $uri/ =404;
    proxy_pass http://localhost:3000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
  }
}

and there is no error log.

Note: I didn't configure SSL yet.
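
A sketch of how the blocks could be split, assuming the loop comes from the unconditional return 302 living in the same default server block that mysite.com itself hits (so every request, even one already on mysite.com, gets redirected again):

server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name 198.51.100.255;                    # bare-IP traffic only
    return 302 $scheme://mysite.com$request_uri;
}

server {
    listen 80;
    server_name mysite.com;                        # the domain from the question
    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}

Note the try_files directive is dropped in the proxy block; with a final =404 it would answer before the request ever reached the app for any path that doesn't exist on the local disk.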

What is the reason for the "ERROR! Unable to retrieve file contents" message being displayed?

Posted: 22 Aug 2021 03:36 PM PDT

I'm trying to run a playbook that includes the following task:

- name: Find Domain.pfx (certificate) filepath
  ansible.windows.win_find:
    paths: C:\
    patterns: [ 'Domain*.pfx' ]
  register: cert_path
  when: "'certificate_autohrities' in group_names"

This task is included in the main.yml file:

---
# tasks file for install
- import_tasks: find_pfx_filepath

When running the following playbook:

---
- name: install lab
  hosts: all
  roles:
    - install

The following error is displayed:

[root@ansible ansible]# ansible-playbook playbooks/install_lab.yml -i inventories/onpremis/domain.com/lab_j.yml -vvv
...
ERROR! Unable to retrieve file contents
Could not find or access '/ansible/playbooks/find_pfx_filepath' on the Ansible Controller.
If you are using a module and expect the file to exist on the remote, see the remote_src option

Note that the path shown in the error message, /ansible/playbooks/find_pfx_filepath, is wrong: find_pfx_filepath.yml is a task file inside the install role's folder...
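
One thing worth ruling out (a guess based on the error path lacking an extension): spelling out the task file name in full so the role's tasks directory can resolve it, e.g.:

---
# tasks file for install
- import_tasks: find_pfx_filepath.yml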

Cross-reference Ansible variables within the same mapping

Posted: 22 Aug 2021 10:43 PM PDT

How can I define an Ansible variable whose value is another variable in the same mapping structure?

To allow sensible namespacing of variables, I am defining mapping structures like this, where some values depend on other variables in the same structure:

acme:
  directory:
    hostname: "acme-staging-v02.api.letsencrypt.org"
letsencrypt:
  config_dir: "/etc/letsencrypt"
  keys_dir: "{{ letsencrypt.config_dir }}/keys"
  csrs_dir: "{{ letsencrypt.config_dir }}/csr"
  certs_dir: "{{ letsencrypt.config_dir }}/certs"
  accounts_dir: "{{ letsencrypt.config_dir }}/accounts"
  csr_file: "{{ letsencrypt.csrs_dir }}/{{ site_domain }}.csr"
  account_key_file: "{{ letsencrypt.csrs_dir }}/{{ acme.directory.hostname }}"
  email_address: "certificate-reminders@{{ site_domain }}"

This fails because Ansible can't resolve the values which reference others within the same data structure:

recursive loop detected in template string: {{ letsencrypt.config_dir }}/keys

So I thought the vars lookup would allow deferring that resolution:

acme:
  directory:
    hostname: "acme-staging-v02.api.letsencrypt.org"
letsencrypt:
  config_dir: "/etc/letsencrypt"
  keys_dir: "{{ lookup('vars', 'letsencrypt.config_dir') }}/keys"
  csrs_dir: "{{ lookup('vars', 'letsencrypt.config_dir') }}/csr"
  certs_dir: "{{ lookup('vars', 'letsencrypt.config_dir') }}/certs"
  accounts_dir: "{{ lookup('vars', 'letsencrypt.config_dir') }}/accounts"
  csr_file: "{{ lookup('vars', 'letsencrypt.csrs_dir') }}/{{ site_domain }}.csr"
  account_key_file: >-
    {{ lookup('vars', 'letsencrypt.csrs_dir') }}/{{ acme.directory.hostname }}
  email_address: "certificate-reminders@{{ site_domain }}"

This fails, because Ansible is attempting to resolve that lookup immediately:

No variable found with this name: letsencrypt.config_dir


Of course I could split them out so they're separate variables. That defeats my purpose, though, of keeping the strongly related variables all grouped in the same namespace.

So what will allow me to define the data structure so that some values can depend on other variables in the same structure?
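
For comparison, a minimal version of the split mentioned above that hoists only the base value out of the mapping, so the rest of the namespace stays intact (the variable name letsencrypt_config_dir is just an example):

letsencrypt_config_dir: "/etc/letsencrypt"
letsencrypt:
  config_dir: "{{ letsencrypt_config_dir }}"
  keys_dir: "{{ letsencrypt_config_dir }}/keys"
  csrs_dir: "{{ letsencrypt_config_dir }}/csr"
  certs_dir: "{{ letsencrypt_config_dir }}/certs"
  accounts_dir: "{{ letsencrypt_config_dir }}/accounts"
  csr_file: "{{ letsencrypt_config_dir }}/csr/{{ site_domain }}.csr"
  email_address: "certificate-reminders@{{ site_domain }}"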

How do I merge multiple .vmdk files into a single one on OSX?

Posted: 22 Aug 2021 03:24 PM PDT

There is a solution here: Converting Multiple VMware disk image to single disk image, but I am using macOS.

What are the commands to do the same thing on a Mac? I have around 10 .vmdk files ranging from 200 MB to 2 GB and I'd like to combine them into a single .vmdk file.
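
One hedged option is qemu-img (installable via Homebrew): pointed at the descriptor .vmdk that references the split extents, it writes out a single-extent copy. The subformat below is the documented name for a single growable file, but treat this as a sketch rather than a verified recipe for these particular disks; VMware Fusion's bundled vmware-vdiskmanager is the other usual route.

brew install qemu
qemu-img convert -p -f vmdk -O vmdk -o subformat=monolithicSparse \
    source-descriptor.vmdk merged-single.vmdk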

Why does Windows prompt me to grant admin permission when I am already connected as an admin?

Posted: 22 Aug 2021 10:42 PM PDT

I have a remote session going on Windows Server 2019. I am logged in as a user who is a member of the local Administrators group on the server. When I try to copy a folder I get this message: [screenshot of a prompt asking me to grant administrator permission to copy the folder]

If I click Continue it goes ahead with the copy. There are other servers I work with where I don't see this behavior.

Why is Windows prompting me this way? Is there a server wide setting that controls this?

Windows Server 2019 - renaming files with some naming convention

Posted: 22 Aug 2021 03:18 PM PDT

We received a ton of files from our sponsor and the files are all formatted like this

[ABCD] Title - Id - Description [RS][x264][CHKSUM].txt  

I could manually rename one at a time but there are more than 500 files that are sent on a weekly basis.

RS - Reviewer Signature (usually the same person)
CHKSUM - a checksum for the file or something.

What I need is the following

Title - Id - Description.txt  

I need to have the [ABCD] prefix removed, and everything from [RS] onward removed, while keeping the .txt extension.

I am open to suggestions (PowerShell or a 3rd-party app).
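
A PowerShell sketch of that rename, assuming every file matches the pattern shown above (the folder path is a placeholder; drop -WhatIf once the preview looks right):

Get-ChildItem -Path 'D:\incoming' -Filter '*.txt' |
  Rename-Item -NewName {
    # strip the leading "[ABCD] " tag, then everything from " [RS]" onward, keep the extension
    ($_.BaseName -replace '^\[[^\]]+\]\s*', '' -replace '\s*\[RS\].*$', '') + $_.Extension
  } -WhatIf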

Apache ProxypassMatch configure to only match requests from self (127.0.0.1)

Posted: 22 Aug 2021 05:02 PM PDT

This is what I have now. I'm trying to only allow/do proxying for requests from localhost, meaning anyone else shouldn't be able to visit /ha_proxy and be directed to, say, the 169.25 IP. Is there a way to do this?

SSLProxyEngine on
SSLProxyVerify none
SSLProxyCheckPeerCN off
SSLProxyCheckPeerExpire off
ProxyTimeout 3600
ProxyPassMatch "^/ha_proxy/([0-9])/(.*)$" "https://169.25.0.$1:43/$2"
ProxyPassMatch "^/manager_proxy/(.*?)/(.*)$" "https://$1/$2"
ProxyPassMatch "^/rest_proxy/(.*)$" "https://127.0.0.1:9/$1"
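
A sketch assuming Apache 2.4 (mod_authz_core), layered on top of the ProxyPassMatch rules above so that only requests originating from the server itself reach the proxied paths:

<LocationMatch "^/(ha_proxy|manager_proxy|rest_proxy)/">
    Require local
</LocationMatch>

On 2.2 the equivalent would be the older Order/Allow from 127.0.0.1 syntax.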

IMAP and SMTP still use a self-signed SSL certificate even though I have issued a mail server SSL in Cyberpanel

Posted: 22 Aug 2021 05:53 PM PDT

I am using CyberPanel on CentOS 7 and I set up SSL for my Postfix and Dovecot. But I still get "SSL Invalid" caused by the self-signed certificate, even though I have configured SSL using Let's Encrypt.

This is /etc/postfix/main.cf

smtpd_tls_cert_file = /etc/letsencrypt/live/mail.domain.net/fullchain.pem
smtpd_tls_key_file = /etc/letsencrypt/live/mail.domain.net/privkey.pem

This is /etc/dovecot/dovecot.conf

ssl_cert = </etc/letsencrypt/live/mail.domain.net/fullchain.pem
ssl_key = </etc/letsencrypt/live/mail.domain.net/privkey.pem
....
local_name mail.domain.net {
        ssl_cert = </etc/letsencrypt/live/mail.domain.net/fullchain.pem
        ssl_key = </etc/letsencrypt/live/mail.domain.net/privkey.pem
}

local_name mail.sub.domain.net {
        ssl_cert = </etc/letsencrypt/live/mail.sub.domain.net/fullchain.pem
        ssl_key = </etc/letsencrypt/live/mail.sub.domain.net/privkey.pem
}

This is /etc/dovecot/conf.d/10-ssl.conf

ssl = required
ssl_cert = </etc/letsencrypt/live/mail.domain.net/fullchain.pem
ssl_key = </etc/letsencrypt/live/mail.domain.net/privkey.pem

All files point to the correct SSL files. However, when I try to log in to IMAP and SMTP using SSL, I get the error "SSL Invalid", caused by a self-signed certificate for www.example.com (not mail.domain.net).

When I check using the command openssl s_client -servername mail.domain.net -connect mail.domain.net:993, I get:

CONNECTED(00000003)  depth=0 C = US, ST = Denial, L = Springfield, O = Dis, CN = www.example.com  verify error:num=18:self signed certificate  verify return:1  depth=0 C = US, ST = Denial, L = Springfield, O = Dis, CN = www.example.com  verify return:1  ---  Certificate chain   0 s:/C=US/ST=Denial/L=Springfield/O=Dis/CN=www.example.com     i:/C=US/ST=Denial/L=Springfield/O=Dis/CN=www.example.com  ---  Server certificate  -----BEGIN CERTIFICATE-----  MIIDizCCAnOgAwIBAgIJAJDbjRXJistMMA0GCSqGSIb3DQEBCwUAMFwxCzAJBgNV  BAYTAlVTMQ8wDQYDVQQIDAZEZW5pYWwxFDASBgNVBAcMC1NwcmluZ2ZpZWxkMQww  CgYDVQQKDANEaXMxGDAWBgNVBAMMD3d3dy5leGFtcGxlLmNvbTAeFw0yMTA2Mjcx  NzI0MDBaFw0zMTA2MjUxNzI0MDBaMFwxCzAJBgNVBAYTAlVTMQ8wDQYDVQQIDAZE  ZW5pYWwxFDASBgNVBAcMC1NwcmluZ2ZpZWxkMQwwCgYDVQQKDANEaXMxGDAWBgNV  BAMMD3d3dy5leGFtcGxlLmNvbTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoC  ggEBAMlprp3IA+Hbl43gIyiv0VQ/8DGKI3hH1E2GnVCuZKHbiwQr/j1vtnJIsFUt  r6AVwW+LAvDVT723CgivZMiXtrO1ItsOoU9ifV6w+nak8cFsFJZKaprXgU6dlQk8  K0xVMvqTEJa29v1igusmpl9Kv80cPjUCEMfcIjxvo51Ob0rV3Eyale+yXImj9Va/  YU7aICSvuLlHkPGf8VRtu+HZOyhzBerROikUN6p2hqMIjK2SUh0uUzbBFRwZHL6O  e2E9Bq2QQ0Cr5Fpid/XPwDPdxnGdnGcjNWv14vqeRDwErGpjGzn3FyiXQdAoB3wG  jJauwCAm680NMuH/mTVvUcal1CcCAwEAAaNQME4wHQYDVR0OBBYEFLAfEGhJad43  w9Pf90yeZg3i/AYtMB8GA1UdIwQYMBaAFLAfEGhJad43w9Pf90yeZg3i/AYtMAwG  A1UdEwQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBAJifYgBsDverQjQ+3x8GWbmz  T4qw4uxlPLal8+wZrmuFxkTdXBixtd7xT3J7NPpXK1I/i9SUMsT9EqwMpvtz8Ybi  409QvsCb/LyADPI4eorbGIByYZa+wTHNbLtMa+PybwoHsLANGvwVf35tuXWhV2u7  /PxxvwZwPRXyDiNZYl6CXm282eqUu2iVU7j5+Mon5OCWN82Z5rUU67DFKyhyE6MC  j4tsWO5ylBKhhZ7A5EJd0gqSSIo495XnaNazXr2KeTOfwrBPOj2dHO1CnMnkubJm  wd31QwGht2wX/yGBtRNk+fxrA4ObKgva/bRLYpcZr6axva+vMFmJ2bVC1W3pUmU=  -----END CERTIFICATE-----  subject=/C=US/ST=Denial/L=Springfield/O=Dis/CN=www.example.com  issuer=/C=US/ST=Denial/L=Springfield/O=Dis/CN=www.example.com  ---  No client certificate CA names sent  Peer signing digest: SHA512  Server Temp Key: ECDH, P-256, 256 bits  ---  SSL handshake has read 1590 bytes and written 441 bytes  ---  New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES256-GCM-SHA384  Server public key is 2048 bit  Secure Renegotiation IS supported  Compression: NONE  Expansion: NONE  No ALPN negotiated  SSL-Session:      Protocol  : TLSv1.2      Cipher    : ECDHE-RSA-AES256-GCM-SHA384      Session-ID: 88F2CCFDE63FE391E9824F596E0C8300E44CB306F969E2A1C0AFE3B75E5A4D74      Session-ID-ctx:       Master-Key: E22198E25F15AA193B9E73446CB934276DF90987DFC75B1B74DDAF3247CA8436CDB93B3274102188B3470DF1A4EFB0D1      Key-Arg   : None      Krb5 Principal: None      PSK identity: None      PSK identity hint: None      TLS session ticket lifetime hint: 300 (seconds)      TLS session ticket:      0000 - e6 78 ae 14 e1 04 0d b4-64 82 65 9e 14 ad 32 9c   .x......d.e...2.      0010 - f3 f0 c2 fd f9 12 5b bf-0f 50 75 79 64 5c bb ba   ......[..Puyd\..      0020 - 31 f6 37 bd 1c b2 e7 dc-d9 02 c7 53 f4 f9 0c a6   1.7........S....      0030 - d4 51 6a 60 6b 34 04 41-fd b3 7d 53 14 ff 1d b4   .Qj`k4.A..}S....      0040 - a2 82 67 6e da d7 80 02-b0 9f 6d 82 b4 17 72 cf   ..gn......m...r.      0050 - 30 05 54 fc 8c be 60 6d-e5 0f b8 25 04 f3 43 6d   0.T...`m...%..Cm      0060 - 7e 13 f1 85 02 03 90 a2-50 82 64 43 aa 79 b8 ee   ~.......P.dC.y..      0070 - 86 08 ef 7a ac 4b c7 86-57 bc 09 a4 9a bb 23 92   ...z.K..W.....#.      0080 - cb 18 74 a4 90 c5 b1 8b-39 3c cc 69 ee e8 fb 08   ..t.....9<.i....      0090 - 60 93 ea 17 35 d5 58 0d-ee 1b 68 c2 98 d0 e9 9c   `...5.X...h.....      
00a0 - f5 a7 24 9b 29 0a 48 6b-70 f8 a5 9a 7c e5 e8 88   ..$.).Hkp...|...        Start Time: 1624855926      Timeout   : 300 (sec)      Verify return code: 18 (self signed certificate)  ---  +OK Dovecot ready.  

This is the log on the mail server (systemctl status postfix -l):

230, TLS handshaking: SSL_accept() failed: error:14094416:SSL routines:ssl3_read_bytes:sslv3 alert certificate unknown: SSL alert number 46, session=<RLYR5sLFeh62/Xx7>
Jun 28 00:42:37 mail-domain-net dovecot[574952]: imap-login: Disconnected (no auth attempts in 0 secs): user=<>, rip=182.253.XXX.XXX, lip=10.5.224.230, TLS handshaking: SSL_accept() failed: error:14094416:SSL routines:ssl3_read_bytes:sslv3 alert certificate unknown: SSL alert number 46, session=<WF4U5sLFlym2/Xx7>
Jun 28 00:42:38 mail-domain-net dovecot[574952]: imap-login: Disconnected (no auth attempts in 0 secs): user=<>, rip=182.253.XXX.XXX, lip=10.5.224.230, TLS handshaking: SSL_accept() failed: error:14094416:SSL routines:ssl3_read_bytes:sslv3 alert certificate unknown: SSL alert number 46, session=<nasX5sLFoim2/Xx7>
Jun 28 00:42:38 mail-domain-net dovecot[574952]: imap-login: Disconnected (no auth attempts in 0 secs): user=<>, rip=182.253.XXX.XXX, lip=10.5.224.230, TLS handshaking: SSL_accept() failed: error:14094416:SSL routines:ssl3_read_bytes:sslv3 alert certificate unknown: SSL alert number 46, session=<BFYY5sLFrCm2/Xx7>
Jun 28 00:42:38 mail-domain-net dovecot[574952]: imap-login: Disconnected (no auth attempts in 0 secs): user=<>, rip=182.253.XXX.XXX, lip=10.5.224.230, TLS handshaking: SSL_accept() failed: error:14094416:SSL routines:ssl3_read_bytes:sslv3 alert certificate unknown: SSL alert number 46, session=<YQkZ5sLFrSm2/Xx7>

Please help me: which file or config should I check?
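
A few generic checks that may help narrow down which daemon is still serving the snakeoil certificate (no CyberPanel-specific paths assumed):

postconf -n | grep -i tls_cert     # the cert file Postfix is actually configured with
doveconf -n | grep -i ssl_cert     # the cert file(s) Dovecot is actually configured with
openssl x509 -in /etc/letsencrypt/live/mail.domain.net/fullchain.pem -noout -subject -dates
systemctl restart postfix dovecot  # a daemon that was never restarted keeps the old cert loaded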

Can Docker volumes be mounted from a device instead of bind mounting a directory?

Posted: 22 Aug 2021 04:47 PM PDT

I'd like to set up a Docker service (docker-compose) on a Linux host with a container mounting an entire [removable] physical hard drive as a Docker volume.

I know that it's trivial to setup bind mounts in docker-compose and I can manually mount the drive on the host. That approach does work, but it involves manual steps with an opportunity for human error. Bad things happen when this service starts with a blank directory on the host's drive root partition. Worse things happen if the drive is unplugged while still mounted.

What I'd like to do is have Docker mount the device directly as a volume. This would have the advantage of fewer/simpler manual steps and a failsafe: if the drive is missing, the service fails to start up. This would also ensure the drive is unmounted when the service is stopped.

Given that volumes are basically just OS mounts, it feels like this should be simple, but after numerous searches through the documentation I'm still no further forward.
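
A hedged docker-compose sketch using the built-in local volume driver, whose type/device/o options are passed through to mount(8); the device path, filesystem type, and image name are assumptions for illustration:

version: "3.8"
services:
  app:
    image: myservice:latest            # hypothetical image
    volumes:
      - extdrive:/data
volumes:
  extdrive:
    driver: local
    driver_opts:
      type: ext4
      device: /dev/disk/by-label/EXTDRIVE   # if the drive is absent, the mount (and the service) fails
      o: defaults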

Folder Redirection GPO fails with 502 "Can't create folder" error for folders that already exist

Posted: 22 Aug 2021 10:01 PM PDT

For a very long time now, we have relied on a registry setting to handle folder redirection for our Documents folders. Part of the login script sets HKCU\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders\Personal to use \\fileserver\%username%. We have other scripts that automatically create and share those folders with working permissions at the same time user accounts are created.

This works, but I know it's not the typical way to handle this. Especially with Windows 10, the semi-annual feature updates have occasionally broken the process. Therefore I'd like to start using the built-in (and supported) Folder Redirection GPOs.

My test policy is setup with these options:

Basic - Redirect everyone to the same location
Redirect to the user's home directory
Grant the user exclusive rights: UNCHECKED
Move the contents of Documents: UNCHECKED
Also apply redirection to Windows 2000 etc: CHECKED, but we have no systems like this
Leave the folder in the new location when the policy is removed

I have a test account with the old registry change removed from the login script. For other details, I'm testing from a Windows 10x64 1909 Enterprise computer. We have Server 2019 DCs, but we're at the 2012 functional level because I have one stinking Windows XP machine left I have to support :(

I have this almost working, but unfortunately I get a 502 error in Event Viewer:

Failed to apply policy and redirect folder "Documents" to "\\fileserver\testuser\".
Redirection options=0x80009210
The following error occurred: "Can't create folder "\\fileserver\testuser"".
Error details: "This security ID may not be assigned as the owner of this object.".

The thing is... the folder already exists, and while the user is indeed not the owner, they do have modification rights. I do not want individual users to have rights to create new folders in the root of this share. I do not want to let Folder Redirection create these folders. We are comfortable with our existing user creation scripts. I just want it to use the folder that is already there.

Is this possible, or will I have to make extensive modifications to our account creation scripts and file share structure, and update a few thousand existing shares? (Each of our current folders is its own share, not simply a directory in a parent shared folder.)

What does proxy_send_timeout really do in Nginx?

Posted: 22 Aug 2021 03:55 PM PDT

In the Nginx documentation there are directives concerning three different timeouts that can be configured for "backend" servers, as follows:

proxy_connect_timeout Defines a timeout for establishing a connection with a proxied server. It should be noted that this timeout cannot usually exceed 75 seconds.

This is easy to understand -- Nginx is about to connect to an upstream "backend" server and if it can't connect within an X amount of time it will give up and return an error. The server is unreachable, has too many connections, etc.

proxy_read_timeout Defines a timeout for reading a response from the proxied server. The timeout is set only between two successive read operations, not for the transmission of the whole response. If the proxied server does not transmit anything within this time, the connection is closed.

This also makes sense -- Nginx has already established a TCP connection with the "backend" server and is now about to actually send the request, but the server is taking long time to process and if it takes more than X amount of time, close the connection and return to the user.

I actually was surprised that Nginx closes the connection; I thought it would keep the connection but return an error to the user. It sounds expensive to re-establish those "backend" TCP connections every time something times out.

proxy_send_timeout Sets a timeout for transmitting a request to the proxied server. The timeout is set only between two successive write operations, not for the transmission of the whole request. If the proxied server does not receive anything within this time, the connection is closed.

This one I don't quite get. I do have a theory, but I want someone to confirm it. The only value I can think of for this timeout is if the request payload is huge (e.g. a big POST request with JSON, or a document the user wants to save). Transmitting the request to the "backend" will require breaking the request into smaller MTU-sized TCP segments and sending those as "chunks" of the original request. So technically we haven't actually sent the request until we transmit all chunks to the server successfully. Is Nginx measuring the time between each chunk of the request? Is that what a "write" means in the doc? Once the request is actually sent, does Nginx start measuring the proxy_read_timeout?
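
For reference, the three directives side by side in a location block (the values and the upstream name "backend" are arbitrary examples, not recommendations):

location /api/ {
    proxy_pass            http://backend;
    proxy_connect_timeout 5s;    # time allowed to establish the upstream TCP connection
    proxy_send_timeout    15s;   # max gap between two successive writes of the request to the upstream
    proxy_read_timeout    60s;   # max gap between two successive reads of the response
}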

NOQUEUE: reject: RCPT from 451 4.3.5 Server configuration

Posted: 22 Aug 2021 07:03 PM PDT

Unfortunately, after some changes to master.cf and main.cf in Postfix, I'm getting these errors when I try to send out an e-mail.

root@mail:/etc/postfix# tail -f /var/log/syslog | grep "rene.brakus@gkri.hr"
Jul 25 17:39:46 mail dovecot: auth-worker(2529): Debug: sql(rene.brakus@gkri.hr,193.198.1.XX): query: SELECT password FROM mailbox WHERE username = 'rene.brakus@gkri.hr' AND active = '1'
Jul 25 17:39:46 mail dovecot: auth: Debug: client passdb out: OK#0111#011user=rene.brakus@gkri.hr
Jul 25 17:39:46 mail postfix/smtpd[8289]: NOQUEUE: filter: RCPT from mail.gkri.hr[193.198.1.XX]: <rene.brakus@gkri.hr>: Sender address triggers FILTER smtp:[127.0.0.1]:10025; from=<rene.brakus@gkri.hr> to=<rene.brakus@gkri.hr> proto=ESMTP helo=<mail.gkri.hr>
Jul 25 17:39:47 mail postfix/smtpd[8289]: NOQUEUE: reject: RCPT from mail.gkri.hr[193.198.1.XX]: 451 4.3.5 Server configuration problem; from=<rene.brakus@gkri.hr> to=<rene.brakus@gkri.hr> proto=ESMTP helo=<mail.gkri.hr>

This is my master.cf:

#  # Postfix master process configuration file.  For details on the format  # of the file, see the master(5) manual page (command: "man 5 master").  #  # Do not forget to execute "postfix reload" after editing this file.  #  # ==========================================================================  # service type  private unpriv  chroot  wakeup  maxproc command + args  #               (yes)   (yes)   (yes)   (never) (100)  # ==========================================================================  # smtp      inet  n       -       -       -       -       smtpd -v  submission inet n       -       -       -       -       smtpd    -o smtpd_enforce_tls=no    -o smtpd_use_tls=yes    -o smtpd_sasl_auth_enable=yes    -o smtpd_client_restrictions=permit_sasl_authenticated,reject    -o smtpd_data_restrictions=reject_unauth_pipelining    -o receive_override_options=  #  -o smtpd_tls_security_level=encrypt  #  -o milter_macro_daemon_name=ORIGINATING  smtps     inet  n       -       -       -       -       smtpd    -o smtpd_tls_wrappermode=yes    -o smtpd_sasl_auth_enable=yes  #  -o smtpd_client_restrictions=permit_sasl_authenticated,reject  #  -o milter_macro_daemon_name=ORIGINATING  #628       inet  n       -       -       -       -       qmqpd  pickup    fifo  n       -       -       60      1       pickup  cleanup   unix  n       -       -       -       0       cleanup  qmgr      fifo  n       -       n       300     1       qmgr  tlsmgr    unix  -       -       -       1000?   1       tlsmgr  rewrite   unix  -       -       -       -       -       trivial-rewrite  bounce    unix  -       -       -       -       0       bounce  defer     unix  -       -       -       -       0       bounce  trace     unix  -       -       -       -       0       bounce  verify    unix  -       -       -       -       1       verify  flush     unix  n       -       -       1000?   0       flush  proxymap  unix  -       -       n       -       -       proxymap  proxywrite unix -       -       n       -       1       proxymap    smtp      unix  -       -       -       -       -       smtp  # When relaying mail as backup MX, disable fallback_relay to avoid MX loops  relay     unix  -       -       -       -       -       smtp          -o smtp_fallback_relay=          -o smtp_helo_timeout=5 -o smtp_connect_timeout=5  showq     unix  n       -       -       -       -       showq  error     unix  -       -       -       -       -       error  retry     unix  -       -       -       -       -       error  discard   unix  -       -       -       -       -       discard  local     unix  -       n       n       -       -       local  virtual   unix  -       n       n       -       -       virtual  lmtp      unix  -       -       -       -       -       lmtp  anvil     unix  -       -       -       -       1       anvil  scache    unix  -       -       -       -       1       scache  #  # ====================================================================  # Interfaces to non-Postfix software. Be sure to examine the manual  # pages of the non-Postfix software to find out what options it wants.  #  # Many of the following services use the Postfix pipe(8) delivery  # agent.  See the pipe(8) man page for information about ${recipient}  # and other message envelope options.  # ====================================================================  #  # maildrop. See the Postfix MAILDROP_README file for details.  
# Also specify in main.cf: maildrop_destination_recipient_limit=1  #  maildrop  unix  -       n       n       -       -       pipe    flags=DRhu user=vmail argv=/usr/bin/maildrop -d ${recipient}  #  # ====================================================================  #  # Recent Cyrus versions can use the existing "lmtp" master.cf entry.  #  # Specify in cyrus.conf:  #   lmtp    cmd="lmtpd -a" listen="localhost:lmtp" proto=tcp4  #  # Specify in main.cf one or more of the following:  #  mailbox_transport = lmtp:inet:localhost  #  virtual_transport = lmtp:inet:localhost  #  # ====================================================================  #  # Cyrus 2.1.5 (Amos Gouaux)  # Also specify in main.cf: cyrus_destination_recipient_limit=1  #  #cyrus     unix  -       n       n       -       -       pipe  #  user=cyrus argv=/cyrus/bin/deliver -e -r ${sender} -m ${extension} ${user}  #  # ====================================================================  # Old example of delivery via Cyrus.  #  #old-cyrus unix  -       n       n       -       -       pipe  #  flags=R user=cyrus argv=/cyrus/bin/deliver -e -m ${extension} ${user}  #  # ====================================================================  #  # See the Postfix UUCP_README file for configuration details.  #  uucp      unix  -       n       n       -       -       pipe   flags=Fqhu user=uucp argv=uux -r -n -z -a$sender - $nexthop!rmail ($recipient)  #  # Other external delivery methods.  #  ifmail    unix  -       n       n       -       -       pipe    flags=F user=ftn argv=/usr/lib/ifmail/ifmail -r $nexthop ($recipient)  bsmtp     unix  -       n       n       -       -       pipe    flags=Fq. user=bsmtp argv=/usr/lib/bsmtp/bsmtp -t$nexthop -f$sender $recipient  scalemail-backend unix  -       n       n       -       2       pipe    flags=R user=scalemail argv=/usr/lib/scalemail/bin/scalemail-store ${nexthop}$  mailman   unix  -       n       n       -       -       pipe    flags=FR user=list argv=/usr/lib/mailman/bin/postfix-to-mailman.py    ${nexthop} ${user}    smtp-amavis unix -      -       n     -       2  smtp      -o smtp_data_done_timeout=1200      -o smtp_send_xforward_command=yes      -o disable_dns_lookups=yes      -o max_use=20  127.0.0.1:10025 inet n  -       n     -       -  smtpd  #    -o content_filter=  #    -o local_recipient_maps=  #    -o relay_recipient_maps=  #    -o smtpd_restriction_classes=      -o smtpd_delay_reject=no      -o smtpd_client_restrictions=permit_mynetworks,reject  #    -o smtpd_helo_restrictions=  #    -o smtpd_sender_restrictions=      -o smtpd_recipient_restrictions=permit_mynetworks,reject     -o smtpd_data_restrictions=reject_unauth_pipelining      -o smtpd_end_of_data_restrictions=      -o mynetworks=127.0.0.0/8      -o smtpd_error_sleep_time=0      -o smtpd_soft_error_limit=1001      -o smtpd_hard_error_limit=1000      -o smtpd_client_connection_count_limit=0      -o smtpd_client_connection_rate_limit=0      -o receive_override_options=no_header_body_checks,no_unknown_recipient_chec$  #    -o local_header_rewrite_clients=      -o smtpd_helo_restrictions=reject_sender_login_mismatch  

main.cf

# See /usr/share/postfix/main.cf.dist for a commented, more complete version    # Debian specific:  Specifying a file name will cause the first  # line of that file to be used as the name.  The Debian default  # is /etc/mailname.  #myorigin = /etc/mailname  #05.09.2016  dodao smtpd_sender_restrictions => napravio modifikaciju (dodao ko$  #smtpd_sender_restrictions = check_sender_access hash:/etc/postfix/sender_access  #smtpd_recipient_restrictions = check_recipient_access hash:/etc/postfix/recipi$  smtpd_banner = $myhostname ESMTP $mail_name (Debian/GNU)  biff = no  # appending .domain is the MUA's job.  append_dot_mydomain = no    # Uncomment the next line to generate "delayed mail" warnings  #delay_warning_time = 4h    # TLS parameters  smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem  smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key  #dolje promjena u "yes" 07092016  smtpd_use_tls=yes  smtpd_tls_session_cache_database = btree:${queue_directory}/smtpd_scache  smtp_tls_session_cache_database = btree:${queue_directory}/smtp_scache    # See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for  # information on enabling SSL in the smtp client.    myhostname = mail.gkri.hr  alias_maps = hash:/etc/aliases  alias_database = hash:/etc/aliases  myorigin = /etc/mailname  mydestination = mail.gkri.hr, localhost.localdomain, localhost  #relayhost =  mynetworks = 127.0.0.0/8, 193.198.1.29  mailbox_size_limit = 0  recipient_delimiter = +  #inet_interfaces = all  inet_interfaces = 193.198.1.XX, 127.0.0.1  inet_protocols = ipv4  #virtual_alias_maps = mysql:/etc/postfix/mysql_virtual_alias_maps.cf  #virtual_alias_maps = hash:/etc/postfix/virtual  virtual_alias_domains=gkri.hr  virtual_gid_maps = static:107  virtual_mailbox_base = /var/vmail  virtual_mailbox_domains = mysql:/etc/postfix/mysql_virtual_domains_maps.cf  virtual_mailbox_maps = mysql:/etc/postfix/mysql_virtual_mailbox_maps.cf  virtual_minimum_uid = 105  virtual_transport = virtual  virtual_uid_maps = static:105    relay_domains = proxy:mysql:/etc/postfix/mysql_relay_domains_maps.cf    content_filter = smtp-amavis:[127.0.0.1]:10024  smtpd_data_restrictions = reject_unauth_pipelining,check_sender_access pcre:/et$    receive_override_options=no_address_mappings  #promjena na "yes" 07092016 - red dolje  #smtpd_sasl_auth_enable = yes  smtpd_sasl_security_options = noanonymous  #broken_sasl_auth_clients = yes    smtpd_sasl_type = dovecot  smtpd_sasl_path = private/auth      #dodao nakon 5-tog reda još na smtpd_recipient_restrictions 24032017  smtpd_recipient_restrictions =    permit_mynetworks,  #  permit_sasl_authenticated,  #  check_sender_access hash:/etc/postfix/sender_checks,  #  check_client_access hash:/etc/postfix/rbl_override,    reject_unauth_destination,    reject_unauth_pipelining,  # 30032017 jer neki nakladnici nemaju reverse  reject_unknown_reverse_client_ho$  # reject_invalid_helo_hostname,  #  reject_non_fqdn_helo_hostname,    reject_non_fqdn_sender,    check_sender_access hash:/etc/postfix/exempt_senders,   check_policy_service inet:127.0.0.1:10023,    reject_non_fqdn_recipient,    reject_unknown_sender_domain,    reject_unknown_recipient_domain,    reject_invalid_hostname,    reject_rbl_client zen.spamhaus.org,    reject_rbl_client bl.spamcop.net,    reject_rbl_client b.barracudacentral.org,  #  reject_rbl_client dnsbl.sorbs.net,    reject_rbl_client cbl.abuseat.org,   reject_rbl_client blackholes.easynet.nl,    reject_rbl_client cbl.abuseat.org,  #  reject_rbl_client proxies.blackholes.wirehub.net, 
   reject_rbl_client sbl.spamhaus.org  #  reject_rbl_client opm.blitzed.org,  #  reject_rbl_client dnsbl.njabl.org,  #  reject_rbl_client list.dsbl.org,  #  reject_rbl_client multihop.dsbl.org      permit    message_size_limit = 104857600  virtual_mailbox_limit = 104857600  smtpd_tls_loglevel = 1    smtpd_client_restrictions = permit_mynetworks,          permit_sasl_authenticated,          reject_unauth_destination,   reject_rbl_client zen.spamhaus.org,          reject_rbl_client bl.spamcop.net,          reject_rbl_client cbl.abuseat.org,          reject_rbl_client blackholes.easynet.nl,          reject_rbl_client cbl.abuseat.org,          reject_rbl_client proxies.blackholes.wirehub.net,          reject_rbl_client bl.spamcop.net,          reject_rbl_client sbl.spamhaus.org,          reject_rbl_client opm.blitzed.org         # reject_rbl_client dnsbl.njabl.org,         # reject_rbl_client list.dsbl.org,         # reject_rbl_client multihop.dsbl.org          #radi poteskoce kada se pokrene (oprez)  #smtpd_recipient_restrictions = check_sender_access hash:/etc/postfix/sender_ac$    #trebalo bi blokirati neke att (trenutno ne radi)  #mime_header_checks = regexp:/etc/postfix/mime_header_checks    #dodatno 05.09.2016 u 14:57h  smtp_destination_concurrency_limit = 3  smtp_destination_rate_delay = 1s  smtp_extra_recipient_limit = 10      #dodano 19092016 radi DKIM  milter_protocol = 6    milter_default_action = accept  smtpd_milters = inet:localhost:1427143  non_smtpd_milters = inet:localhost:1427143      #dodatno 30032017  disable_vrfy_command = yes  smtpd_delay_reject = yes  smtpd_helo_required = yes  #Rate throttlanje    smtpd_client_connection_rate_limit = 20  smtpd_error_sleep_time = 10s  smtpd_soft_error_limit = 3  #smptd_hard_error_limit = 5    #dodatno18022019 radi vivainfo, maknuo 04072019  smtpd_relay_restrictions = permit_mynetworks, permit_sasl_authenticated  

EDIT

ul 25 21:10:01 mail postfix/smtpd[9434]: connect from 93-137-104-174.adsl.net.t-com.hr[93.137.104.XXX]
Jul 25 21:10:01 mail dovecot: auth: Debug: auth client connected (pid=0)
Jul 25 21:10:01 mail dovecot: auth: Debug: client in: AUTH#0111#011PLAIN#011service=smtp#011nologin#011lip=193.198.1.XX#011rip=93.137.104.174#011resp=AHJlbmUuYnJha3VzQGdrcmkuaHIAUmViYmVjYSsxMTEx (previous base64 data may contain sensitive data)
Jul 25 21:10:01 mail dovecot: auth-worker(9431): Debug: sql(rene.brakus@gkri.hr,93.137.104.XXX): query: SELECT password FROM mailbox WHERE username = 'rene.brakus@gkri.hr' AND active = '1'
Jul 25 21:10:01 mail dovecot: auth: Debug: client passdb out: OK#0111#011user=rene.brakus@gkri.hr
Jul 25 21:10:02 mail postfix/trivial-rewrite[9437]: warning: do not list domain gkri.hr in BOTH virtual_alias_domains and virtual_mailbox_domains
Jul 25 21:10:02 mail postfix/smtpd[9434]: NOQUEUE: reject: RCPT from 93-137-104-XXX.adsl.net.t-com.hr[93.137.104.XXX]: 554 5.7.1 <rene.brakus@gmail.com>: Relay access denied; from=<rene.brakus@gkri.hr> to=<rene.brakus@gmail.com> proto=ESMTP helo=<[192.168.88.XXX]>
Jul 25 21:10:05 mail postfix/smtpd[9434]: disconnect from 93-137-104-174.adsl.net.t-com.hr[93.137.104.XXX]
Jul 25 21:10:57 mail postfix/smtpd[9434]: connect from 93-137-104-174.adsl.net.t-com.hr[93.137.104.XXX]
Jul 25 21:10:57 mail dovecot: auth: Debug: client in: AUTH#0112#011PLAIN#011service=smtp#011nologin#011lip=193.198.1.XX#011rip=93.137.104.174#011resp=AHJlbmUuYnJha3VzQGdrcmkuaHIAUmViYmVjYSsxMTEx (previous base64 data may contain sensitive data)
Jul 25 21:10:57 mail dovecot: auth-worker(9431): Debug: sql(rene.brakus@gkri.hr,93.137.104.XXX): query: SELECT password FROM mailbox WHERE username = 'rene.brakus@gkri.hr' AND active = '1'
Jul 25 21:10:57 mail dovecot: auth: Debug: client passdb out: OK#0112#011user=rene.brakus@gkri.hr
Jul 25 21:10:58 mail postfix/smtpd[9434]: NOQUEUE: reject: RCPT from 93-137-104-174.adsl.net.t-com.hr[93.137.104.XXX: 554 5.7.1 <rene.brakus@gmail.com>: Relay access denied; from=<rene.brakus@gkri.hr> to=<rene.brakus@gmail.com> proto=ESMTP helo=<[192.168.88.XXX]>
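
A small diagnostic sketch: "451 4.3.5 Server configuration problem" is usually accompanied by a warning in the mail log naming the restriction or lookup table that failed, so the checks below may point at the broken piece (no paths are assumed beyond what main.cf already shows):

postfix check                                        # reports missing/unreadable map files
postconf -n | grep -E 'restrictions|content_filter'
grep -i 'postfix.*warning' /var/log/syslog | tail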

Can't set http-only and secure cookies in Apache

Posted: 22 Aug 2021 05:03 PM PDT

My website is on Apache which is hosted in AWS VPS.

I tried setting the http-only and secure flags by editing the security.conf file, but upon checking the headers via https://hackertarget.com/http-header-check/ I see that there is no change and cookies are still set without these flags.

I followed these steps:

Ensure you have mod_headers.so enabled in Apache HTTP server

Add following entry in /etc/apache2/conf-enabled/security.conf

Header edit Set-Cookie ^(.*)$ $1;HttpOnly;Secure

Restart Apache HTTP server to test

Can anyone help me out?
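
A quick verification sketch, assuming the Debian/Ubuntu-style layout implied by /etc/apache2/conf-enabled (your-site.example is a placeholder):

sudo a2enmod headers
sudo apache2ctl -M | grep headers        # should list headers_module
sudo apache2ctl configtest && sudo systemctl restart apache2
curl -sI https://your-site.example/ | grep -i '^set-cookie'

One common gotcha: if the cookie is set on a proxied or error response, the directive may need the "always" condition, i.e. Header always edit Set-Cookie ^(.*)$ $1;HttpOnly;Secure.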

can telnet to a service, but not access service ports directly

Posted: 22 Aug 2021 06:01 PM PDT

We're running a variety of services on our cloud provider. Everything normally works fine, but occasionally we end up with issues connecting to one host (which has our repos on it). We haven't been able to find a solution to the connectivity problem, so we completely rebuilt the host at a different cloud provider. Things had been running fine, but the same connectivity issue is starting again. I'll try to summarize clearly:

The host that is having connectivity issues is running Gitlab. We also ssh into that host a fair amount.

When we run into connectivity issues, we cannot access ssh, git, https etc. Pinging the host works fine. I can telnet to port 22, and get a response:

Connected to xyz.
Escape character is '^]'.
SSH-2.0-OpenSSH_7.6p1 Ubuntu-4ubuntu0.1

I can access any port on the host via Telnet, and I get back a response immediately. If I try to connect to the same host via ssh, I get:

ssh -v -v me@xyz
OpenSSH_7.9p1, LibreSSL 2.7.3
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 48: Applying options for *
debug2: resolve_canonicalize: hostname xyz is address
debug2: ssh_connect_direct
debug1: Connecting to xyz [xyz] port 22.
debug1: connect to address xyz port 22: Operation timed out
ssh: connect to host xyz port 22: Operation timed out

If I disconnect from our local network, and connect via hotspot to the Internet, I'm able to access said host properly. This only happens to users on our corporate network.

I went down the path of checking all our local routers/firewalls, and couldn't find any issue. I then connected to the Internet from the external side of our corporate firewall, and the connectivity issues immediately started again.

I've spoken with our cloud provider (Google) and they see nothing wrong with our cloud configuration or servers. I've spoken with our Internet provider, and they can't see anything wrong either.

Anyone have any ideas?

Thanks.

Is it possible to persist stick-tables across server reloads in Haproxy

Posted: 22 Aug 2021 07:03 PM PDT

I have read https://github.com/haproxy/haproxy/blob/master/examples/seamless_reload.txt on how to set up persisting the server state when you reload, but this does not save/load the contents of stick-tables. Is that possible?
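
As I read the docs, the usual approach is a peers section whose local peer name matches the running instance (set with -L, or defaulting to the hostname); during a reload the old and new processes then exchange stick-table entries. A sketch, not verified against this particular setup:

peers mypeers
    peer haproxy1 127.0.0.1:10000       # must match the local peer name of the running instance

backend app
    stick-table type ip size 100k expire 30m peers mypeers
    stick on src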

IIS rewrite to add a query variable to every request

Posted: 22 Aug 2021 04:06 PM PDT

I know next to nothing about configuring IIS, but have been assigned a task to add a query variable to every request that is made through an IIS server that's being used as a Web proxy.

We look for any URL where the first path segment after the domain is a fixed string, like /bongo/, and redirect that to a back-end server. I have it working to perform the redirects, but I'm pretty sure there is a lot of garbage that I don't need in the following config, which I got from another answer.

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <system.webServer>
        <rewrite>
            <rules>
                <rule name="Route the requests for bongo Player" stopProcessing="true">
                    <match url="^bongo/(.*)" />
                    <conditions>
                        <add input="{CACHE_URL}" pattern="^(https?)://" />
                    </conditions>
                    <action type="Rewrite" url="https://bongo.fse.companyinc.com/bongo/{R:1}" />
                    <serverVariables>
                        <set name="HTTP_ACCEPT_ENCODING" value="" />
                    </serverVariables>
                </rule>
            </rules>
            <outboundRules>
                <rule name="ReverseProxyOutboundRule1" preCondition="ResponseIsHtml1">
                    <match filterByTags="A, Area, Base, Form, Frame, Head, IFrame, Img, Input, Link, Script" pattern="^http(s)?://bongo.fse.companyinc.com/bongo/(.*)" />
                    <action type="Rewrite" value="/{R:2}" />
                </rule>
                <rule name="RewriteRelativePaths" preCondition="ResponseIsHtml1">
                    <match filterByTags="A, Area, Base, Form, Frame, Head, IFrame, Img, Input, Link, Script" pattern="^/bongo/(.*)" negate="false" />
                    <action type="Rewrite" value="/{R:1}" />
                </rule>
                <preConditions>
                    <preCondition name="ResponseIsHtml1">
                        <add input="{RESPONSE_CONTENT_TYPE}" pattern="^text/html" />
                    </preCondition>
                </preConditions>
            </outboundRules>
        </rewrite>
        <tracing>
            <traceFailedRequests>
                <add path="*">
                    <traceAreas>
                        <add provider="ASP" verbosity="Verbose" />
                        <add provider="ASPNET" areas="Infrastructure,Module,Page,AppServices" verbosity="Verbose" />
                        <add provider="ISAPI Extension" verbosity="Verbose" />
                        <add provider="WWW Server" areas="Authentication,Security,Filter,StaticFile,CGI,Compression,Cache,RequestNotifications,Module,FastCGI,WebSocket,Rewrite,RequestRouting" verbosity="Verbose" />
                    </traceAreas>
                    <failureDefinitions timeTaken="00:00:30" statusCodes="200-299,300-399,400-499,500-599" verbosity="Warning" />
                </add>
            </traceFailedRequests>
        </tracing>
    </system.webServer>
</configuration>

1) I'm not even sure that I need the outbound rules.

2) The main problem that I have is that I want to always append a query variable, whether there are existing query variables or not. I've tried too many configurations to list here, but I can't seem to always get it to append the variable when proxying. Also, I think there needs to be some sort of conditional that does something different if there are already arguments or if the one to be added is the only one.

I'd appreciate help from any IIS gurus out there.
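
On the query-string point only, a hedged sketch: appendQueryString="false" on the action stops IIS from re-adding the original query string automatically, so it can be appended explicitly after the extra variable ("myvar=1" is a placeholder):

<rule name="Route bongo with extra query var" stopProcessing="true">
    <match url="^bongo/(.*)" />
    <action type="Rewrite"
            url="https://bongo.fse.companyinc.com/bongo/{R:1}?myvar=1&amp;{QUERY_STRING}"
            appendQueryString="false" />
</rule>

When the original query string is empty this leaves a trailing "&", which most back ends ignore; a condition on {QUERY_STRING} can split it into two rules if that matters.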

504 Gateway Timeout even when Timeout is set to 600

Posted: 22 Aug 2021 08:04 PM PDT

I have the following unresolved problem regarding PHP and Apache. I have a long-running script that ALWAYS returns 504 Gateway Timeout after 30 seconds of running. However, if I check /server-process, I can see that the request is still going on. Before it's suggested: I'm not looking to make this a cron job, as the long process should finish in just under a minute and, in the current situation, I need it to run in the browser. Here's what I've set on the server:

/etc/apache2/apache2.conf
- Timeout 600

/etc/php/7.0/apache2/php.ini
- max_execution_time = 300
- max_input_time = 300

Here's my server info:

root@izzystorage-core:~# lsb_release -r
Release:        16.04

root@izzystorage-core:~# apache2 -v
Server version: Apache/2.4.18 (Ubuntu)
Server built:   2017-09-18T15:09:02

root@izzystorage-core:~# php -v
PHP 7.0.22-0ubuntu0.16.04.1 (cli) ( NTS )
Copyright (c) 1997-2017 The PHP Group
Zend Engine v3.0.0, Copyright (c) 1998-2017 Zend Technologies
    with Zend OPcache v7.0.22-0ubuntu0.16.04.1, Copyright (c) 1999-2017, by Zend Technologies

Do you guys have any idea what's going on with my server?

edit

  1. I don't have mod_proxy
  2. This is only one server without load balancer in between
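
A quick sketch to see what could be answering with a 504 besides Apache's core Timeout (504 is a gateway status, so something is acting as a gateway for PHP even if mod_proxy seems absent):

apache2ctl -M | grep -Ei 'proxy|fcgid|fastcgi'              # any proxy/FastCGI handler loaded?
grep -RniE 'ProxyTimeout|FcgidIOTimeout|fastcgi' /etc/apache2/ 2>/dev/null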

Apache using wrong version of PHP

Posted: 22 Aug 2021 03:01 PM PDT

I'm trying to downgrade PHP from 5.5 to 5.3 (not by choice).

I thought I uninstalled PHP 5.5 by typing the following: sudo apt-get remove "php*"

I then installed php 5.3 by following these instructions

However, when I call phpinfo() inside a script and run it, I still get PHP Version 5.5.9-1ubuntu4.19.

But when I call php -v from the command line I get PHP 5.3.29 (cli) (built: Sep 2 2016 10:56:16)

When I cd to the root directory and type locate libphp5.so, only one path is found, and that's the path Apache is already using.

How do I tell Apache to use 5.3?
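
A sketch of how I'd check which module file Apache is actually loading (paths assume the standard Ubuntu layout; whether a 5.3 Apache module even exists depends on how 5.3 was installed, since the CLI binary alone doesn't provide libphp5.so):

grep -Rn "libphp5" /etc/apache2/ 2>/dev/null     # the LoadModule line, usually mods-available/php5.load
ls -l /usr/lib/apache2/modules/libphp5.so        # the module currently pointed at
sudo updatedb && locate libphp5.so               # refresh locate before trusting its single result
sudo service apache2 restart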

How can I see the expire date of a file in Tivoli Storage Manager?

Posted: 22 Aug 2021 06:01 PM PDT

If I understand correctly, when I do an incremental backup and a file has been removed from the client, the server marks it as inactive which makes it eligible for purging when the expire time has passed.

Using the dsmc client on a linux server I can see the list of inactive files, but no information is shown on how long they will be kept.

How can I know exactly when a specific inactive file will expire? Also: where do I configure the expire time and how do I see which value it is currently set to?
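
A sketch of where I'd look, per the TSM client docs (output columns vary by version, so treat this as a pointer rather than a recipe): the retention of inactive copies is set on the server in the backup copy group of the management class the file is bound to.

dsmc query backup -inactive "/path/to/file"    # shows the management class the copy is bound to
dsmc query mgmtclass -detail                   # "Retain Extra Versions" / "Retain Only Version" values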

Php fatal error loading 'Mail/mimeDecode' even with php-pear install

Posted: 22 Aug 2021 09:04 PM PDT

I have a CentOS machine where I installed PEAR using yum install php-pear. So I tried this in my PHP page:

require_once 'Mail/RFC822.php';
require_once 'Mail/mimeDecode.php';

I get this error: "Warning: require_once(Mail/mimeDecode.php): failed to open stream: No such file or directory in /var/www/html/pro1/ast/include.inc.php on line 36. Fatal error: require_once(): Failed opening required 'Mail/mimeDecode.php' (include_path='.:/usr/share/pear:/usr/share/php') in /var/www/html/pro1/ast/include.inc.php on line 36". What else needs to be installed?
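
A sketch, assuming the missing class comes from the separate PEAR package Mail_mimeDecode (yum's php-pear installs the PEAR installer itself, not individual packages):

pear install Mail
pear install Mail_Mime
pear install Mail_mimeDecode
pear list                 # the files should land under /usr/share/pear/Mail/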

Cannot SSH to Azure Linux VM from terminal

Posted: 22 Aug 2021 04:06 PM PDT

ssh azureuser@xxx.cloudapp.net : 22

When I connect and enter the password I created, it just goes back to the local prompt and does not actually show the remote prompt.

I'm thinking I need to create an SSH key instead of using a user/password?

http://azure.microsoft.com/en-us/documentation/articles/virtual-machines-linux-use-ssh-key/
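
A sketch of the key-based route from that article (the key file name is just an example), plus a verbose connect to see where the session actually drops:

ssh-keygen -t rsa -b 2048 -f ~/.ssh/azure_rsa
# associate ~/.ssh/azure_rsa.pub with the VM as described in the linked doc, then:
ssh -v -i ~/.ssh/azure_rsa azureuser@xxx.cloudapp.net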

Jboss 7.1 - server.log gets created without world-read access

Posted: 22 Aug 2021 09:04 PM PDT

On our systems running JBoss 6.1, server.log is always created (every night) with protections of 664.

But on our server running JBoss 7.1, server.log gets created with protections of 600

Without world-read or group-read protections, Nagios can't look for errors.

I assume this is set somewhere in jboss-logging.xml somehow. Any advice?
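
One hedged avenue that isn't a logging setting at all: the mode of a newly created server.log follows the umask of the JVM process (and in AS 7 the logging configuration itself lives in standalone.xml rather than jboss-logging.xml). Assuming the stock launch scripts, something like this in bin/standalone.conf before startup:

umask 022    # new log files become 644 instead of 600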

Apache as a reverse proxy not working for gunicorn

Posted: 22 Aug 2021 05:03 PM PDT

My goal is to let clients connect over HTTPS while Apache serves the request to my application, which is running on the same server over HTTP. Here is my minimalistic Apache configuration file (for incoming HTTP requests I simply redirect everything to HTTPS):

NameVirtualHost 1.2.3.4:443
NameVirtualHost 1.2.3.4:80

LoadModule headers_module /usr/lib/apache2/modules/mod_headers.so

<VirtualHost 1.2.3.4:443>
  # Admin email, Server Name (domain name) and any aliases
  ServerAdmin admin@abc.com
  ServerName abc.com
  ServerAlias www.abc.com
  RequestReadTimeout header=90 body=90

  DocumentRoot /path/to/my/project
  LogLevel warn
  WSGIDaemonProcess abc_ssl processes=2 maximum-requests=500 threads=10
  WSGIProcessGroup abc_ssl
  WSGIScriptAlias / /path/to/my/project.wsgi
  WSGIApplicationGroup %{GLOBAL}

  SSLEngine on
  SSLCertificateFile /home/django/.ssh/abc.crt
  SSLCertificateKeyFile /home/django/.ssh/server.key
  SSLCertificateChainFile /home/django/.ssh/abc.ca-bundle

  RequestHeader set X-FORWARDED-SSL "on"
  RequestHeader set X-FORWARDED_PROTO "https"
  ProxyRequests off
  ProxyPreserveHost on

  <Location /stream/>
      Order Allow,Deny
      Allow from All
  </Location>

  ProxyPass /stream/ http://127.0.0.1:8001/
  ProxyPassReverse /stream/ http://127.0.0.1:8001/

</VirtualHost>

Clearly gunicorn is running and listening on http://127.0.0.1:8001/:

2013-08-31 05:05:51 [15025] [INFO] Starting gunicorn 0.17.2
2013-08-31 05:05:51 [15025] [INFO] Listening at: http://127.0.0.1:8001 (15025)
2013-08-31 05:05:51 [15025] [INFO] Using worker: eventlet
2013-08-31 05:05:51 [15044] [INFO] Booting worker with pid: 15044
2013-08-31 05:05:51 [15045] [INFO] Booting worker with pid: 15045
2013-08-31 05:05:51 [15046] [INFO] Booting worker with pid: 15046

But in the browser I only see NetworkError: 404 NOT FOUND - https://abc.com/stream/. Please help me, I am stuck; I'd really appreciate it.
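
A minimal sketch to tell whether the 404 comes from Apache or from the app behind gunicorn (the log path assumes the default Debian/Ubuntu location):

curl -v http://127.0.0.1:8001/        # on the server: what the backend returns for the proxied path "/"
curl -vk https://abc.com/stream/      # through Apache: compare the status line and Server: header
tail -f /var/log/apache2/error.log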

Proper Passenger + Apache Permissions to fix error "No such file or directory - config/environment.rb"

Posted: 22 Aug 2021 10:01 PM PDT

I am having a problem with Passenger not being able to start due to an apparently common issue in which Passenger claims: No such file or directory - config/environment.rb.

I have searched the web high and low, and this appears to be a permissions-related issue. It is my understanding that Passenger runs as the owner of the config.ru and config/environment.rb files. In my case this owner is "admin". I am running the app root in the home directory of the admin user. So I believe I have the correct permissions set using: sudo chown -R admin:admin /home/admin/www and sudo chmod -R 755 /home/admin/www

where the app root is located at: /home/admin/www/app

Here is my virtual server config file:

<VirtualHost *:80>
    ServerName track.example.com
    DocumentRoot /home/admin/www/app/current/public
    <Directory /home/admin/www/app/current/public>
        Options FollowSymLinks
        AllowOverride none
        Order allow,deny
        Allow from all
    </Directory>
    PassengerResolveSymlinksInDocumentRoot on
    RailsBaseURI /
    PassengerAppRoot /home/admin/www/app
    RailsEnv production

    ErrorLog ${APACHE_LOG_DIR}/error.log
    # Possible values include: debug, info, notice, warn, error, crit,
    # alert, emerg.
    LogLevel debug
    CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>

I am running Ubuntu 12.0.4, Rails 3.2.8, Ruby 1.9.3, Passenger 3.0.18, Apache 2
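
A sketch of the checks I'd run first (not a definitive fix): with DocumentRoot pointing at .../app/current/public, Passenger usually expects config/environment.rb one directory up from that, i.e. under .../app/current, so PassengerAppRoot may need to point at the current symlink rather than at .../app. Every directory on the path also needs execute permission for the user Passenger runs as.

ls -ld /home/admin/www/app/current
ls -l  /home/admin/www/app/current/config/environment.rb
namei -l /home/admin/www/app/current/config/environment.rb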

Thanks for your help.

weird routes automatically being added to windows routing table

Posted: 22 Aug 2021 08:04 PM PDT

On our Windows 2003 domain, with XP clients, we have started seeing routes appearing in the routing tables on both the servers and the clients. The route is a /32 for another computer on the domain. The route gets added when one Windows computer connects to another computer and needs to authenticate.

For example, if computer A with ip 10.0.1.5/24 browses the c: drive of computer B with ip 10.0.2.5/24, a static route will get added on computer B like so:

dest     netmask         gateway  interface
10.0.1.5 255.255.255.255 10.0.2.1 10.0.2.5

This also happens on windows authenticated SQL server connections. It does not happen when computers A and B are on the same subnet.

None of the servers have RIP or any other routing protocols enabled, and there are no batch files etc setting routes automatically.

There is another windows domain that we manage with a near identical configuration that is not exhibiting this behaviour. The only difference with this domain is that it is not up to date with its patches.

Is this meant to be happening? Has anyone else seen this? Why is it needed when I have perfectly good default gateways set on all the computers on the domain?!
