Saturday, August 14, 2021

Recent Questions - Server Fault


Apache / Websockets: Direct Sockets to Different (Or To Both) Ports Based On Path - ProxyPass

Posted: 14 Aug 2021 10:11 PM PDT

I am running a website on Apache. I have two apps that use web sockets, one is based on Node, which uses port 3000 and the other is based on Phoenix, which uses port 4000. Both apps also use a reverse proxy. For example, I have something like this:

  <Location /node/>
      ProxyPass http://127.0.0.1:3000/
      ProxyPassReverse http://127.0.0.1:3000/
  </Location>

  <Location /phoenix/>
      ProxyPass http://127.0.0.1:4000/
      ProxyPassReverse http://127.0.0.1:4000/
  </Location>

However, I am having problems getting the web sockets to work. I have something like this set up for the Node app (outside the <Location> context):

  RewriteCond %{QUERY_STRING} transport=polling       [NC]
  RewriteRule /(.*)           http://127.0.0.1:3000/$1 [P]
  RewriteCond %{HTTP:Upgrade} websocket               [NC]
  RewriteRule /(.*)           ws://127.0.0.1:3000/$1  [P]

I developed my Node app a few years ago and everything worked perfectly. However, I am currently developing the Phoenix app, and I don't know how to handle directing the sockets. Eventually, I plan on phasing out the Node app completely, but I need to keep it running for our users until the new app is developed. However, I still need the new app running at the same time on the website so I can develop it. It would be nice to get sockets working on both apps at the same time.
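One possible approach (an untested sketch, using the /node/ and /phoenix/ path prefixes from the <Location> blocks above) is to anchor the upgrade rewrite rules to each app's path prefix, so that each app's WebSocket traffic is proxied to its own backend. This assumes mod_proxy_wstunnel and mod_rewrite are enabled:

```
RewriteEngine On
# WebSocket upgrades for the Node app
RewriteCond %{HTTP:Upgrade} websocket [NC]
RewriteRule ^/node/(.*)    ws://127.0.0.1:3000/$1 [P,L]
# WebSocket upgrades for the Phoenix app
RewriteCond %{HTTP:Upgrade} websocket [NC]
RewriteRule ^/phoenix/(.*) ws://127.0.0.1:4000/$1 [P,L]
```

With the rules scoped by prefix, both apps' sockets can coexist during the migration period.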

Why did Volume Bytes Used become so high on an AWS Aurora RDS MySQL cluster?

Posted: 14 Aug 2021 10:09 PM PDT

Billed volume storage on our Aurora RDS MySQL cluster went up from 70GB to 1200GB within a few hours and it is just not going down.

AWS premium support seems to be clueless. They had us upgrade Aurora RDS MySQL 5.7 from 2.09.1 to 2.10.0, saying there is a bug in the version we were running and that the space should be freed on reboot. We did the upgrade and manually rebooted the cluster afterwards, but it made no difference to the billed volume storage.

Space actually used by our application database is 69GB including indexes. Free space in this database is 15GB so total used space should be ~85GB. There are no binary logs, temporary tables, we are also not using replicas (this is a single node cluster).

SELECT table_schema "DataBase Name",
       sum( data_length + index_length ) / 1024 / 1024 / 1024 "Occupied Space in GB",
       sum( data_free ) / 1024 / 1024 / 1024 "Free Space in GB",
       sum( data_length + index_length + data_free ) / 1024 / 1024 / 1024 "Total Database Size in GB"
FROM information_schema.TABLES
GROUP BY table_schema;

+--------------------+----------------------+------------------+---------------------------+
| DataBase Name      | Occupied Space in GB | Free Space in GB | Total Database Size in GB |
+--------------------+----------------------+------------------+---------------------------+
| information_schema |       0.000198364258 |   0.875976562500 |            0.876174926758 |
| app920             |      69.161712646484 |  15.512695312500 |           84.674407958984 |
| mysql              |       0.019073486328 | 925.045898437500 |          925.064971923828 |
| performance_schema |       0.000000000000 |   0.000000000000 |            0.000000000000 |
| sys                |       0.000015258789 |  21.512695312500 |           21.512710571289 |
+--------------------+----------------------+------------------+---------------------------+
5 rows in set (0.05 sec)

The timing of the sudden spike on 26th June matches the maintenance window we set for this cluster (it is our night time, when there is no traffic). We suspect something went wrong during the maintenance window. Our application makes no use of internal databases like mysql, and we made no schema changes.

We want to understand what made Volume Bytes Used grow so high here, and how to prevent it.

Removing httpd built from tarball

Posted: 14 Aug 2021 09:49 PM PDT

I need help removing an httpd that I built from a source tarball; the installation instructions I followed are from this blog.

yum remove can't delete the installed httpd, and the httpd -v command shows that httpd still exists on my server:

  [root@localhost httpd-2.4.28]# httpd -v
  Server version: Apache/2.4.28 (Unix)
  Server built:   Aug 15 2021 09:21:05

After some Google searching, I read that I need to manually delete the folders and files the install added:

  [root@linuxhelp1 httpd-2.4.28]# make install
  Making install in srclib
  make[1]: Entering directory `/root/httpd-2.4.28/srclib'
  Making install in apr
  make[2]: Entering directory `/root/httpd-2.4.28/srclib/apr'
  make[3]: Entering directory `/root/httpd-2.4.28/srclib/apr'
  make[3]: Nothing to be done for `local-all'.
  make[3]: Leaving directory `/root/httpd-2.4.28/srclib/apr'
  /root/httpd-2.4.28/srclib/apr/build/mkdir.sh /usr/local/apache2/lib /usr/local/apache2/bin /usr/local/apache2/build \
      /usr/local/apache2/lib/pkgconfig /usr/local/apache2/include
  mkdir /usr/local/apache2
  mkdir /usr/local/apache2/lib
  mkdir /usr/local/apache2/bin
  mkdir /usr/local/apache2/build
  mkdir /usr/local/apache2/lib/pkgconfig
  mkdir /usr/local/apache2/include
  mkdir /usr/local/apache2/manual
  make[1]: Leaving directory `/root/httpd-2.4.28'

I ran make uninstall but it doesn't work. Should I just delete all of these folders and everything in them?

  /usr/local/apache2
  /usr/local/apache2/bin
  /usr/local/apache2/build
  /usr/local/apache2/include
  /usr/local/apache2/lib
  /usr/local/apache2/lib/pkgconfig
  /usr/local/apache2/manual
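Since a source build installs everything under its configure --prefix (here /usr/local/apache2), removal generally is just deleting that tree. A minimal sketch, demonstrated on a throwaway prefix; on the real system you would substitute /usr/local/apache2 (with sudo), assuming nothing was installed outside the prefix:

```shell
# Demonstrated on a scratch prefix; substitute /usr/local/apache2 (with sudo)
# for a real source-built install.
prefix="$(mktemp -d)/apache2"
mkdir -p "$prefix/bin" "$prefix/lib" "$prefix/conf"
touch "$prefix/bin/httpd"       # stand-in for the installed binary
rm -rf "$prefix"                # removes the whole install tree at once
hash -r                         # make the shell forget any cached httpd path
```

After removing the real prefix, `httpd -v` should fail because the binary is gone (run `hash -r` first so the shell does not use a stale cached path).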

How to grep for the word Linux, pipe it into a file, then search for the browsers (Mozilla, Firefox, Safari) and count them using the Linux command line?

Posted: 14 Aug 2021 10:02 PM PDT

We are looking for a string within quotation marks where the word Linux occurs somewhere between parentheses, which in turn sit somewhere inside the quoted string.
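The question gives no sample input, but this reads like parsing user-agent strings in an access log. A hedged sketch (the log lines and file names below are made up for illustration):

```shell
# Sample log lines (invented for this example)
cat > access.log <<'EOF'
1.2.3.4 - - "Mozilla/5.0 (X11; Linux x86_64) Firefox/90.0"
5.6.7.8 - - "Mozilla/5.0 (Windows NT 10.0) Chrome/92.0"
9.9.9.9 - - "Mozilla/5.0 (X11; Linux i686) Safari/537.36"
EOF

# 1) keep only lines whose quoted string contains (...Linux...), save to a file
grep -E '"[^"]*\([^)]*Linux[^)]*\)[^"]*"' access.log > linux_agents.txt

# 2) count each browser name in the saved file
grep -o -E 'Mozilla|Firefox|Safari' linux_agents.txt | sort | uniq -c
# →   1 Firefox
#     2 Mozilla
#     1 Safari
```

The first grep keeps the two Linux lines; `grep -o` emits one match per browser name, and `sort | uniq -c` counts them.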

Can I create host-specific users in Postgres? (ex: postgres@localhost)

Posted: 14 Aug 2021 07:14 PM PDT

My Question

Can I set permissions on a user (ex: postgres) so that the user is only allowed to log in over TCP from localhost, but not from the Internet?

Trusted Sockets vs Passwords for Remotes

I get that you can initialize postgres to allow local users to log in without a password, and remote hosts to log in with a password:

  initdb \
      -D "$POSTGRES_DATA_DIR/" \
      --username postgres --pwfile "$PWFILE" \
      --auth-local=trust --auth-host=password

Intranet vs Internet

For any system that's connecting across the internet I want to use a user that has a very, very strong (non-memorable) random 128-bit string.

For local and intranet access, however, I'd prefer to be able to have a username and password that I can remember (and type).

Can I do this... or do I just have to set up one user per system that's allowed to connect, with a .pgpass on each?
(I don't want to share keys in plaintext files between computers)
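This is exactly what pg_hba.conf rules can express: entries are matched top to bottom against connection type, database, user, and source address. A hedged sketch (the app_user name is a placeholder; scram-sha-256 requires PostgreSQL 10+, older versions would use md5):

```
# pg_hba.conf sketch - first matching rule wins
# TYPE  DATABASE  USER      ADDRESS        METHOD
local   all       postgres                 trust          # Unix socket, no password
host    all       postgres  127.0.0.1/32   scram-sha-256  # TCP, localhost only
host    all       postgres  ::1/128        scram-sha-256
host    all       postgres  0.0.0.0/0      reject         # postgres from anywhere else: refused
host    all       app_user  0.0.0.0/0      scram-sha-256  # internet-facing user, strong password
```

With a layout like this the memorable account simply never matches a remote rule, so no per-system .pgpass sharing is needed for it.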

How to allow POST to a PHP file in Apache only from the same origin?

Posted: 14 Aug 2021 05:55 PM PDT

I have a website with a form that, when submitted, successfully sends a POST request to a .php file on the server (Apache 2.4.48).

However, when I let Javascript handle the submitting through a JS fetch() request the server responds with a 405 error.

So I analysed the request and response headers for the two POSTs; they are almost identical, so I am confused about why the first method works and the other gets refused.

This is the request/response when using the form (the exclamation points are where the fetch() differs):

GENERAL
Request URL: https://example.com/php/script.php
Request Method: POST
Status Code: 302
Remote Address: 160.153.133.187:443
Referrer Policy: origin-when-cross-origin

RESPONSE HEADERS
cache-control: max-age=0                    !
content-length: 0
content-type: text/html; charset=UTF-8
date: Sat, 14 Aug 2021 23:21:34 GMT
expires: Sat, 14 Aug 2021 23:21:34 GMT
location: https://example.com/pages/form-submitted.html#submitted
server: Apache
vary: User-Agent
x-powered-by: PHP/8.0.8

REQUEST HEADERS
:authority: example.com
:method: POST
:path: /php/script.php
:scheme: https
accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9     !
accept-encoding: gzip, deflate, br
accept-language: en-GB,en;q=0.9,es-ES;q=0.8,es;q=0.7,it-IT;q=0.6,it;q=0.5
cache-control: no-cache
content-length: 96
content-type: application/x-www-form-urlencoded     !
dnt: 1
origin: https://example.com
pragma: no-cache
referer: https://example.com/contact
sec-ch-ua: "Chromium";v="92", " Not A;Brand";v="99", "Google Chrome";v="92"
sec-ch-ua-mobile: ?0
sec-fetch-dest: document        !
sec-fetch-mode: navigate        !
sec-fetch-site: same-origin
sec-fetch-user: ?1
upgrade-insecure-requests: 1
user-agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.131 Safari/537.36

FORM DATA
name: Tai Zen
email: taizen@example.com
phone-number: 12345678910
privacy-consent: on
submit:

This instead is the request/response when sending a POST through fetch():

GENERAL
Request URL: https://example.com/php/script.php
Request Method: POST
Status Code: 302
Remote Address: 160.153.133.187:443
Referrer Policy: same-origin

RESPONSE HEADERS
cache-control: max-age=2741
content-length: 0
content-type: text/html; charset=UTF-8
date: Sat, 14 Aug 2021 23:31:44 GMT
expires: Sun, 15 Aug 2021 00:17:26 GMT
location: https://example.com/405
server: Apache
vary: User-Agent
x-powered-by: PHP/8.0.8

REQUEST HEADERS
:authority: example.com
:method: POST
:path: /php/script.php
:scheme: https
accept: */*
accept-encoding: gzip, deflate, br
accept-language: en-GB,en;q=0.9,es-ES;q=0.8,es;q=0.7,it-IT;q=0.6,it;q=0.5
cache-control: no-cache
content-length: 471
content-type: multipart/form-data; boundary=----WebKitFormBoundary28YayN0mmqwdpQh0
dnt: 1
origin: https://example.com
pragma: no-cache
referer: https://example.com/contact
sec-ch-ua: "Chromium";v="92", " Not A;Brand";v="99", "Google Chrome";v="92"
sec-ch-ua-mobile: ?0
sec-fetch-dest: empty
sec-fetch-mode: same-origin
sec-fetch-site: same-origin
user-agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.131 Safari/537.36

FORM DATA
name: Tai Zen
email: taizen@example.com
phone-number: 12345678910
privacy-consent: on
submit:

Should I change something server side to allow POST requests to that specific file?

However, I would like it to accept POST requests only from the JS I wrote and not from external entities, but I don't know exactly how to do that. I tried what was suggested in this answer, but it did not work: the server gave a 500 error. I suppose that may be because I am on a shared hosting plan and do not have full access to my Apache settings.
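On shared hosting, one approach that usually works without touching the main server config is a .htaccess rule that rejects POSTs whose Origin header is not your own site. A hedged sketch (assumes mod_rewrite is available and .htaccess overrides are allowed; example.com and /php/script.php are the placeholders from the question):

```
RewriteEngine On
# Reject POSTs to the script unless the browser says they come from our own origin
RewriteCond %{REQUEST_METHOD} =POST
RewriteCond %{REQUEST_URI} ^/php/script\.php$
RewriteCond %{HTTP:Origin} !^https://example\.com$ [NC]
RewriteRule ^ - [F]
```

Note this only deters browsers: non-browser clients can forge the Origin header, so it is a nuisance barrier, not real authentication.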

How to have a multi-port app in the same GKE pod (using the CLI)?

Posted: 14 Aug 2021 05:15 PM PDT

Presentation

Working on an Elixir umbrella app (a parent app managing multiple child apps), I included two web apps within the main one, each with its own URL and port (admin.example.com:8081 and www.example.com:8080).

I recently deployed the app onto Google Kubernetes Engine, following this tutorial. Though I had some problems from time to time, I managed to complete it and have one website accessible online (I can't access the other).

Configuration

Here is the production Dockerfile

FROM elixir:alpine

ARG app_name=prod
ARG phoenix_subdir=.
ARG build_env=prod

ENV MIX_ENV=${build_env} TERM=xterm
WORKDIR /opt/app
RUN apk update --no-cache \
  && apk upgrade --no-cache \
  && apk add --update --no-cache nodejs npm make build-base openssl ncurses-libs libgcc libstdc++ \
  && mix local.rebar --force \
  && mix local.hex --force
COPY . .

RUN mix do deps.get, compile
RUN cd apps/admin/assets \
  && npm rebuild node-sass \
  && npm install \
  && ./node_modules/webpack/bin/webpack.js \
  && cd .. \
  && mix phx.digest

RUN cd apps/app/assets \
  && npm rebuild node-sass \
  && npm install \
  && ./node_modules/webpack/bin/webpack.js \
  && cd .. \
  && mix phx.digest
RUN mix release ${app_name} \
  && mv _build/${build_env}/rel/${app_name} /opt/release \
  && mv /opt/release/bin/${app_name} /opt/release/bin/start_server


FROM alpine:latest

ARG hello

RUN apk update --no-cache \
  && apk upgrade --no-cache \
  && apk --no-cache --update add ca-certificates openssl-dev bash openssl libc6-compat libgcc libstdc++ ncurses-libs \
  && apk add --no-cache --virtual .erlang-build gcc g++ libc-dev \
  && mkdir -p /usr/local/bin \
  && wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 \
     -O /usr/local/bin/cloud_sql_proxy \
  # && wget -q -O /etc/apk/keys/sgerrand.rsa.pub https://alpine-pkgs.sgerrand.com/sgerrand.rsa.pub \
  # && wget https://github.com/sgerrand/alpine-pkg-glibc/releases/download/2.33-r0/glibc-2.33-r0.apk \
  # && apk add --no-cache glibc-2.33-r0.apk \
  && chmod +x /usr/local/bin/cloud_sql_proxy \
  && mkdir -p /tmp/cloudsql
ENV GCLOUD_PROJECT_ID=${project_id} \
  REPLACE_OS_VARS=true
EXPOSE ${PORT}
EXPOSE 8081
WORKDIR /opt/app
COPY --from=0 /opt/release .
CMD (/usr/local/bin/cloud_sql_proxy \
  -projects=${GCLOUD_PROJECT_ID} -dir=/tmp/cloudsql &); \
  exec /opt/app/bin/start_server start

as well as the cloudbuild.yaml used by GKE to build the pod.

steps:
- name: "gcr.io/cloud-builders/docker"
  args: ["build", "-t", "gcr.io/next-borders-v1/prod:$_TAG",
         "--build-arg", "project_id=next-borders-v1", ".",
         "--file=./prod.Dockerfile"]
images: ["gcr.io/next-borders-v1/prod:$_TAG"]

With these two files in hand, I follow this set of commands:

gcloud builds submit --substitutions=_TAG=v0.2 .
kubectl run hello-web --image=gcr.io/${PROJECT_ID}/hello:v1 --port 8080
kubectl expose pod hello-web --type=LoadBalancer --port=80 --target-port=8080

I also tried exposing multiple ports:

kubectl delete svc hello-web
kubectl expose pod hello-web --type=LoadBalancer --port=80,8080,8081 --target-port=8081

I also added the IP to the DNS configuration, so I could try to access the apps through their designated URLs (each app includes a server that can filter requests by URL).

Questions

So, here are my few questions as a beginner: Can a GKE pod handle multiple ports in one app? If yes, can I do it through the CLI, and how? Or do I have to use a configuration file? If not, what is the best way? Two pods, one for the app and the other for the admin website?

Observation

There is actually a similar thread, but it doesn't cover the GKE command line interface, and the tutorial I followed explains neither the configuration files nor their usage. If a configuration file is the solution, I have so far no clue how to write or use it.
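For reference, `kubectl expose` only takes a single --target-port, so a Service that maps several ports is normally written as a manifest and applied with `kubectl apply -f`. A hedged sketch (the service name and the run=hello-web selector are assumptions based on the `kubectl run hello-web` command above):

```yaml
# Multi-port Service sketch; apply with: kubectl apply -f service.yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-web
spec:
  type: LoadBalancer
  selector:
    run: hello-web        # label that `kubectl run hello-web` sets by default
  ports:
  - name: www
    port: 80              # external port
    targetPort: 8080      # www app inside the pod
  - name: admin
    port: 8081
    targetPort: 8081      # admin app inside the pod
```

So yes, a pod can serve multiple ports; it is the CLI's `expose` shortcut, not the platform, that limits you to one target port.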

Documentation I looked at while trying to find an answer

Internet access via a chain of two WireGuard nodes

Posted: 14 Aug 2021 06:06 PM PDT

I have the following network nodes, all connected in one Wireguard network.

  • Cellphone
  • Home router behind ISP NAT (country X)
  • VPS with dedicated IP (country Y)

The VPS acts as the WireGuard server for all nodes because only it has a static IP. I use it to access my home network over a cellular connection. It is also configured as a gateway for the cellphone, for secure internet access.

Is it possible to have internet access on the cellphone via the home router? For example, I want to access sites that are available only in country X while travelling. The traffic route would be:

Cellphone (country Z) -> VPS (country Y) -> Router (country X) -> example.com  

Let's say obtaining a server with dedicated IP in country X is not achievable. Thanks.
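In principle this is doable: the VPS forwards the phone's tunnel traffic to the router peer, and the router NATs it out to its home internet connection. A heavily hedged sketch of the relevant pieces (the 10.0.0.0/24 tunnel subnet, peer addresses, interface name eth0, and port 51820 are all assumptions; the VPS also needs ip_forward enabled and may need policy routing so its own traffic does not loop through the router):

```
# Phone config: send everything into the tunnel
[Peer]                                  # the VPS
PublicKey = <vps-pubkey>
Endpoint = vps.example.com:51820
AllowedIPs = 0.0.0.0/0

# VPS config: let the router peer carry internet-bound traffic
[Peer]                                  # the home router
PublicKey = <router-pubkey>
AllowedIPs = 10.0.0.2/32, 0.0.0.0/0    # WireGuard routes default via this peer

# On the home router: masquerade tunnel traffic out the WAN interface
#   sysctl -w net.ipv4.ip_forward=1
#   iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -o eth0 -j MASQUERADE
```

The key idea is that AllowedIPs on the VPS decides which peer receives forwarded default-route traffic, so pointing 0.0.0.0/0 at the router peer sends the phone's browsing out through country X.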

mount nfs as another folder on home

Posted: 14 Aug 2021 03:56 PM PDT

I have purchased a WD-Ex2 NAS and am trying to share a folder via NFS with my Ubuntu machine.

This folder will be used only by this machine, and I want to have execute permissions with my user. I would like it to be treated as one more home folder.

I am mounting the folder as follows

$ sudo cat /etc/fstab
...
#nfs mycloud
192.168.0.151:/nfs/tmp_msigs60  /media/tmp_msigs60  nfs  defaults,user,relatime,rw,exec  0  0

but I don't have execute permissions, and the owner is user #501, not my user.

I have tried mounting with the following options

192.168.0.151:/nfs/tmp_msigs60 /media/tmp_msigs60  nfs     defaults,user,relatime,rw,exec,uid=1000,gid=1000,umask=002    0       0  

but when putting uid=1000,gid=1000,umask=002 I get the error:

mount.nfs: an incorrect mount option was specified  

Another thing I tried is editing /etc/idmapd.conf to change nobody and nogroup to my user, but that had no effect either:

[General]

Verbosity = 0
Pipefs-Directory = /run/rpc_pipefs

[Mapping]
Nobody-User = rodrigo
Nobody-Group = rodrigo

Another thing I have noticed is that write and read speeds start out high but drop considerably while transferring a file to that folder. I don't know the recommended parameters for mounting it efficiently; I have seen that buffer sizes (e.g. rsize/wsize) are sometimes passed as parameters.
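For context: uid=, gid=, and umask= are not valid NFS mount options (hence the "incorrect mount option" error); with NFSv3 the ownership you see comes from the uid/gid stored on the server. One common fix is to squash all access to a fixed uid/gid on the server side. A hedged sketch of the server's export line (assuming 1000/1000 matches the desktop user and the NAS accepts standard /etc/exports options):

```
# /etc/exports on the NAS: map every client user to uid/gid 1000
/nfs/tmp_msigs60  192.168.0.0/24(rw,sync,all_squash,anonuid=1000,anongid=1000)
```

idmapd only applies to NFSv4 name mapping, which would explain why editing idmapd.conf had no effect on a v3 mount.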

Apache2 Virtual Host Proxy Forward

Posted: 14 Aug 2021 03:47 PM PDT

Hi all, I am trying to use a virtual host to forward streaming.fusion.tk from my webserver to my internal Emby server.

I have setup a config file called streaming.conf in /etc/apache2/sites-available/

<VirtualHost *>
ServerName streaming.fusion.tk
ServerAdmin fusion@localhost

ProxyRequests off
<Proxy "*">
Order deny, allow
Allow from all
</Proxy>

ProxyPass / http://192.168.0.203:8096/
ProxyPassReverse / http:192.168.0.203:8096/

</VirtualHost>

I enabled the site using sudo a2ensite streaming.conf and restarted the apache2 service.

When I try and get to the site I am not able to access anything.

Some help would be appreciated.

Docker Registry pull/push 443 only

Posted: 14 Aug 2021 04:19 PM PDT

I've set up a Docker Registry (port 5000), which is then accessible to the internet via Reverse-Proxy (HAproxy) via https (port 443).

My reverse-proxy isn't listening on port 80 (for various reasons) - only 443.

However, when I try to pull/push images to the registry, I get this error:

> docker push dockerreg.mydomain.tld/foo/bar:tag
The push refers to repository [dockerreg.mydomain.tld/foo/bar]
67e5bc702bd3: Layer already exists
1ee6a18298af: Layer already exists
0d8d066a4449: Layer already exists
....
402111a9b517: Layer already exists
5be968ab3b04: Layer already exists
b8d33b7d28fe: Layer already exists
Patch http://dockerreg.mydomain.tld/v2/foo/bar/blobs/uploads/840a9fc2-5c10-4c0e-b674-82f76c3794a3?_state=vcTZPbOrQmhcKwilCyutNGwVpFjvWigJCApZHA834757Ik5hbWUiOiJmb3Rvd2V0dGVyL2NsZWFuIiwiVVVJRCI6Ijg0MGE5ZmMyLTVjMTAtNGMwZS1iNjc0LTgyZjc2YzM3OTRhMyIsIk9mZnNldCI6MCwiU3RhcnRlZEF0IjoiMjAyMS0wOC0xNFQyMTozODo1Mi42MzgxNjY5NTdaIn0%3D:
dial tcp 1.2.3.4:80: i/o timeout

So apparently it tries to access the registry via HTTP on port 80.

I was able to use the docker login command with https://dockerreg.... but the docker pull/push commands can't be run with an https:// prefix.

Is there any way to access my docker registry without a https-redirect on port 80 of my reverse-proxy?

How to better understand IPv6 to block requests

Posted: 14 Aug 2021 02:44 PM PDT

With IPv4, whenever I notice strange requests coming to my server I can easily block the address from making further requests (in iptables, or in my .htaccess file...). With IPv6 it's not that easy, because it's pretty simple to change the IP address, or even worse, it's pretty easy to rotate thousands of IPv6 addresses and make thousands of requests in a short time, each from a different address.

With IPv4 this was not such a big problem because it would be very expensive to own or rotate thousands of IPv4 addresses. Even companies like Linode or DigitalOcean ask lots of questions if you start adding more than a few IP addresses to your account (even if you pay for those addresses, they will ask whether you are using them to send spam, to DDoS...).

So my question is this: is there some mostly-fixed "part" or "substring" of an IPv6 address that I can reliably blacklist, since the other (changing) part probably belongs to the same person or the same network? Take for example this address:

2001:0db8:85a3:0000:0000:1111:2222:3333  

Can I tell, from the address above, that if I block all the IPs containing "2001:0db8:85a3:0000:0000:1111" it will probably come from the same person/computer?
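Roughly yes: the first 64 bits (the first four hextets) are the routed network prefix, and a single subscriber typically controls at least a /64, so blocking the whole /64 is the usual starting point. A quick sketch that extracts that prefix (note the naive cut only works on fully expanded addresses without "::" compression):

```shell
addr='2001:0db8:85a3:0000:0000:1111:2222:3333'
# First four hextets = the /64 network prefix (assumes an uncompressed address)
prefix="$(printf '%s\n' "$addr" | cut -d: -f1-4)"
echo "${prefix}::/64"    # → 2001:0db8:85a3:0000::/64
# Then block the whole /64, e.g.:
#   ip6tables -A INPUT -s "${prefix}::/64" -j DROP
```

Some hosters hand out larger blocks (up to /48) to one customer, so escalating to wider prefixes on repeated abuse is also common.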

Thank you!

Windows GPO printer deployment not appearing for new profiles

Posted: 14 Aug 2021 04:30 PM PDT

Old printers that have always deployed via Print Management > Deploy via GPO are now not deploying for new profiles.

The only changes have been to my settings GPO with regard to PrintNightmare and disallowing Point and Print, under Computer > Policies > Administrative Templates > Printers > Point and Print Restrictions:

  • Users can only point and print to these servers: disabled
  • Users can only point and print to machines in their forest: disabled
  • When installing drivers for a new connection: show warning and prompt
  • When updating drivers for an existing connection: show warning and prompt

But new printers do not appear. If I try to deploy the printer via user preferences (instead of Print Management > Deploy via GPO), it complains that the driver is not available on the client PC.

Kubernetes Pod OOMKilled Issue

Posted: 14 Aug 2021 07:01 PM PDT

The scenario: we run some websites based on an nginx image in a Kubernetes cluster. When the cluster had nodes with 2 cores and 4GB RAM each, the pods were configured with cpu: 40m and memory: 100MiB. Later, we upgraded the cluster to nodes with 4 cores and 8GB RAM each, but then every pod kept getting OOMKilled. So we increased the memory on every pod to around 300MiB, and then everything seems to work fine.

My question is why this happens and how to solve it. P.S. If we revert to nodes with 2 cores and 4GB RAM each, the pods work just fine with the reduced 100MiB.
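For reference, the values described correspond to a container spec like the sketch below (container layout is assumed; in Kubernetes the memory limit, not the request, is what triggers OOMKilled, so setting them separately lets a pod burst above its scheduling guarantee):

```yaml
# Sketch: per-container resources as described in the question
resources:
  requests:
    cpu: 40m
    memory: 100Mi     # scheduling guarantee
  limits:
    memory: 300Mi     # container is OOMKilled only above this
```

If only a single value was set (so request == limit == 100Mi), any workload change that pushes usage past 100Mi kills the pod regardless of how much free RAM the node has.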

Prevent Windows PowerShell console from flashing up

Posted: 14 Aug 2021 08:01 PM PDT

CustomApp is registered with a URI Scheme in Windows 10 so it launches when Chrome browser visits CustomApp://userid@departmentid

Computer\HKEY_CLASSES_ROOT\CustomApp\shell\open\command

C:\Windows\system32\WindowsPowerShell\v1.0\powershell.exe -ExecutionPolicy Bypass -WindowStyle Hidden -File "C:\Program Files\CustomApp\bin\launch-customapp.ps1" -uri "%1"  

This works great for launching the CustomApp, but the blue Windows PowerShell console flashes up briefly during execution. How can I prevent it from popping up?

I've tried these parameters but the console window still flashes up.

-WindowStyle Hidden
-NonInteractive
-NoLogo

Windows server GPO, how to force SSID connection if in range

Posted: 14 Aug 2021 07:01 PM PDT

I have many wifi networks, but only one of these are suitable for domain computers of my windows 2016 domain.

Can I set up a GPO to force use of a particular SSID when it is in range? Many times I have found that users chose the wrong network, and the wrong SSID then became the preferred one.

I already set up a GPO, but it just adds a profile to the SSID list and does nothing about connection priority.


Consider that all SSID signal power are the same because they are broadcast by the same antennas.

Windows Server 2012: easiest way to monitor ports for error 4625 NTLM attacks

Posted: 14 Aug 2021 03:02 PM PDT

I'm getting thousands of hack attacks on a Windows server resulting in Security log error 4625 entries. Hackers are using random IPs, so the usual RDPguard, Syspeace, etc. tools don't work. Port 3389 is closed on the server, so I'm surprised at the continued attacks.

I'd like to figure out what local ports the attackers are connecting to for their attempts, but all the automated tools I've found only look at IP. And the default Windows server logs also only show IP and remote port, not local port.

I know I can manually look at Wireshark logs, but that's labor-intensive. I'd like to find a tool that monitors failed logins and simply corroborates them with the local port, so I know what ports to close. Ideally, this doesn't generate gigantic logs or require constant monitoring; the tool would preferably be triggered by bad logins and collect the port and service info. Any ideas?

Zabbix text value trigger

Posted: 14 Aug 2021 06:00 PM PDT

I am trying to configure a Zabbix trigger for an external check, which has to react if the value returned by the external check differs from:

  Slave_SQL_Running: Yes
  Replication running
  Connection to host.com closed.

Using {db2.playtech.ru.com:mysql_replica_check.sh.last(60)}<>expression throws syntax error.

Is there a way to configure this trigger in Zabbix?
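One likely direction (a hedged sketch, not a confirmed fix): in pre-5.x Zabbix trigger syntax, text items are usually matched with the str() function rather than compared with <> against last(). str() returns 1 when the substring is present in the latest value, so a trigger that fires when the expected replication line is missing could look like:

```
{db2.playtech.ru.com:mysql_replica_check.sh.str("Slave_SQL_Running: Yes")}=0
```

The host and item key are copied from the question; for a full-text comparison against the whole three-line output, regexp() with an anchored pattern is the other commonly used option.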

can't set secure_file_priv on mysql 5.7 Ubuntu 16.04

Posted: 14 Aug 2021 06:00 PM PDT

How do I set a value for secure_file_priv?

I found this which tells what settings may be used https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html#sysvar_secure_file_priv

The mysql server starts without any command-line options; there is nothing overriding its .cnf files.

user@server:~$ ps aux | grep [m]ysql
mysql     4495  0.0  7.0 544368 144924 ?       Ssl  09:16   0:02 /usr/sbin/mysqld

Running :~$ mysqld --verbose --help tells me

Default options are read from the following files in the given order:
/etc/my.cnf /etc/mysql/my.cnf ~/.my.cnf

Only the 2nd file exists. It is the beginning of a symlink chain to /etc/mysql/my.cnf.migrated as follows...

user@server:~$ ls -l /etc/mysql/my.cnf
lrwxrwxrwx 1 root root 24 Aug 12 15:15 /etc/mysql/my.cnf -> /etc/alternatives/my.cnf
user@server:~$ ls -l /etc/alternatives/my.cnf
lrwxrwxrwx 1 root root 26 Aug 12 15:15 /etc/alternatives/my.cnf -> /etc/mysql/my.cnf.migrated
user@server:~$ ls -l /etc/mysql/my.cnf.migrated
-rw-r--r-- 1 root root 4455 Dec 13 03:18 /etc/mysql/my.cnf.migrated

I've tried setting values for secure_file_priv in that last file, restarting the mysql server and even rebooting the Ubuntu server. No matter the value that is set the command

mysql> SELECT @@GLOBAL.secure_file_priv;  

always returns /var/lib/mysql-files/.

I've also searched for other .cnf files and tried setting the value for secure_file_priv in each of them

user@server:~$ find /etc -iname "m*.cn*" -type f
/etc/mysql/conf.d/mysql.cnf
/etc/mysql/my.cnf.migrated
/etc/mysql/my.cnf.fallback
/etc/mysql/mysql.conf.d/mysqld.cnf
/etc/mysql/mysql.cnf

No matter. After making a change, restarting the server, and checking the value with

mysql> SELECT @@GLOBAL.secure_file_priv;  

the result /var/lib/mysql-files/ is always the same. It doesn't change.

What do I need to do to set a value for secure_file_priv?
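One detail worth checking (an assumption about the cause, since the question doesn't show the exact edits): secure_file_priv is a server-side option that is only read from a [mysqld] option group, so it is silently ignored if placed at the top of a file or under another group such as [client]. On Ubuntu 16.04 the packaged server's [mysqld] section normally lives in /etc/mysql/mysql.conf.d/mysqld.cnf:

```
# /etc/mysql/mysql.conf.d/mysqld.cnf  (the path used for the upload dir is a placeholder)
[mysqld]
secure_file_priv = /path/to/upload/dir
```

followed by sudo systemctl restart mysql. Note also that the directory must exist and be readable by the mysql user, and that on some builds an invalid value makes the server keep the compiled-in default /var/lib/mysql-files/.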

How To Fix Padding Oracle (CVE-2016-2107) On Ubuntu/Apache/PHP

Posted: 14 Aug 2021 04:00 PM PDT

I am trying to fix CVE-2016-2107.

I consulted several sites, which do not seem to provide a clear answer for all cases:

I use Apache2 2.4.12 with PHP 5.5.26.

I ran: apt-get install openssl libssl-dev and sudo apt-get install libssl1.0.0.

It installed new OpenSSL but Apache/PHP still uses old installation, phpinfo() shows:

openssl
OpenSSL support          enabled
OpenSSL Library Version  OpenSSL 1.0.1f 6 Jan 2014
OpenSSL Header Version   OpenSSL 1.0.1f 6 Jan 2014

Proof new OpenSSL is installed:

ubuntu@ip-xxxxx:/usr/bin$ openssl version
OpenSSL 1.0.2h 3 May 2016

dpkg -l | grep ssl
ii  libflac8:amd64             1.3.0-2ubuntu0.14.04.1          amd64  Free Lossless Audio Codec - runtime C library
ii  libgnutls-openssl27:amd64  2.12.23-12ubuntu2.2             amd64  GNU TLS library - OpenSSL wrapper
ii  libio-socket-ssl-perl      1.965-1ubuntu1                  all    Perl module implementing object oriented interface to SSL sockets
ii  libnet-smtp-ssl-perl       1.01-3                          all    Perl module providing SSL support to Net::SMTP
ii  libnet-ssleay-perl         1.58-1                          amd64  Perl module for Secure Sockets Layer (SSL)
ii  libssl-dev:amd64           1.0.2h-1+deb.sury.org~trusty+1  amd64  Secure Sockets Layer toolkit - development files
ii  libssl-doc                 1.0.1f-1ubuntu2.15              all    Secure Sockets Layer toolkit - development documentation
ii  libssl1.0.0:amd64          1.0.1f-1ubuntu2.19              amd64  Secure Sockets Layer toolkit - shared libraries
ii  libssl1.0.2:amd64          1.0.2h-1+deb.sury.org~trusty+1  amd64  Secure Sockets Layer toolkit - shared libraries
ii  openssl                    1.0.2h-1+deb.sury.org~trusty+1  amd64  Secure Sockets Layer toolkit - cryptographic utility
ii  python-openssl             0.13-2ubuntu6                   amd64  Python 2 wrapper around the OpenSSL library
ii  ssl-cert                   1.0.33                          all    simple debconf wrapper for OpenSSL

apt-cache policy libssl1.0.2-dbg
libssl1.0.2-dbg:
  Installed: (none)
  Candidate: 1.0.2h-1+deb.sury.org~trusty+1
  Version table:
     1.0.2h-1+deb.sury.org~trusty+1 0
        500 http://ppa.launchpad.net/ondrej/php5/ubuntu/ trusty/main amd64 Packages

ubuntu@ip-xxxxx:/usr/bin$ apt-cache policy libssl-dev
libssl-dev:
  Installed: 1.0.2h-1+deb.sury.org~trusty+1
  Candidate: 1.0.2h-1+deb.sury.org~trusty+1
  Version table:
 *** 1.0.2h-1+deb.sury.org~trusty+1 0
        500 http://ppa.launchpad.net/ondrej/php5/ubuntu/ trusty/main amd64 Packages
        100 /var/lib/dpkg/status
     1.0.1f-1ubuntu2.19 0
        500 http://us-east-1.ec2.archive.ubuntu.com/ubuntu/ trusty-updates/main amd64 Packages
        500 http://security.ubuntu.com/ubuntu/ trusty-security/main amd64 Packages
     1.0.1f-1ubuntu2 0
        500 http://us-east-1.ec2.archive.ubuntu.com/ubuntu/ trusty/main amd64 Packages

Can someone please help me tell Apache/PHP about the new OpenSSL installation? Thanks!

How to properly configure PAC file?

Posted: 14 Aug 2021 09:07 PM PDT

I have a squid + diladele proxy box in my network. I have setup a PAC file that should do the following:

1) If the IP address of the client belongs to the current network (192.168.0.0/24) and it tries to access a resource outside the network, use the proxy.
2) If the client is trying to access an internal resource, give direct access and bypass the proxy.

Here is what I wrote so far

// If the IP address of the local machine is within a defined
// subnet, send to a specific proxy.
    if (isInNet(myIpAddress(), "192.168.0.0", "255.255.255.0"))
        return "PROXY 192.168.0.253:3128";

// If the requested website is hosted within the internal network, send direct.
    if (isPlainHostName(host) ||
        shExpMatch(host, "*local") ||
        isInNet(dnsResolve(host), "192.168.0.0", "255.255.0.0") ||
        isInNet(dnsResolve(host), "127.0.0.1", "255.255.255.255") ||
        shExpMatch(host, "localhost"))
        return "DIRECT";

// DEFAULT RULE: All other traffic, use below proxies, in fail-over order.
        return "DIRECT";

Everything works perfectly, however when I try to access a resource on localhost ( I have a lamp stack on my device ) for some reason I get redirected to my proxy web interface (192.168.0.253). What am I doing wrong?
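A likely cause (a sketch of a fix, not a verified answer): PAC rules run top to bottom, and the first rule tests the client machine's own IP, which is inside 192.168.0.0/24 for every request, so it returns the proxy before the localhost checks are ever reached. Reordering so the destination checks come first would look like:

```
function FindProxyForURL(url, host) {
    // Internal / localhost destinations first: go direct.
    if (isPlainHostName(host) ||
        shExpMatch(host, "*local") ||
        shExpMatch(host, "localhost") ||
        isInNet(dnsResolve(host), "192.168.0.0", "255.255.0.0") ||
        isInNet(dnsResolve(host), "127.0.0.1", "255.255.255.255"))
        return "DIRECT";

    // Everything else from LAN clients goes through the proxy.
    if (isInNet(myIpAddress(), "192.168.0.0", "255.255.255.0"))
        return "PROXY 192.168.0.253:3128";

    return "DIRECT";
}
```

The same helper functions are used; only the order of the two if-blocks changes.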

Using Tomcat behind Apache2 Http with different context paths

Posted: 14 Aug 2021 10:04 PM PDT

On our Ubuntu webserver we have an Apache2 HTTP server in conjunction with a JSF application running on a Tomcat 8 application server, using the AJP 1.3 connector and HTTPS/SSL. I want my app, which runs on localhost:8009/myApp/, to be accessible from https://subdomain.domain.com (subdomain and domain are placeholders, of course). In other words, I want different context paths (/ on apache2, /myApp on tomcat).

Now I'm facing the problem that, although the welcome page is accessible, all resources/images/links are broken, as they still contain the context path /myApp. I've tried to set up corresponding ProxyPass/ProxyPassReverse settings without success.

<VirtualHost _default_:443>
        ServerAdmin admin@domain.com
        DocumentRoot /srv/www/subdomain.domain.com
        ServerName subdomain.domain.com

        RewriteEngine On
        RewriteCond %{HTTP_HOST} !^subdomain\.domain\.com$ [NC]
        RewriteRule .? https://subdomain.domain.com%{REQUEST_URI} [R=301,L]

        <Location />
                ProxyPass ajp://localhost:8009/myApp/ connectiontimeout=5 timeout=300
                ProxyPassReverse https://localhost:8080/myApp/
                ProxyPassReverse https://subdomain.domain.com/myApp/
                ProxyPassReverse ajp://localhost:8009/myApp/
                ProxyPassReverseCookiePath /myApp/ /

                #Order deny,allow
                Allow from all
        </Location>

</VirtualHost>

PS: As a workaround, myApp currently runs on the root context "/" on Tomcat, but I want to change that to accommodate multiple web apps.

In tomcat's conf/server.xml I have the following connector configured:

<!-- Define an AJP 1.3 Connector on port 8009 -->
<Connector port="8009" protocol="AJP/1.3" redirectPort="8443"
           address="127.0.0.1"
           proxyName="subdomain.domain.com" proxyPort="443" secure="true" />
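One detail that often bites in setups like the one above: ProxyPassReverse only rewrites redirect response headers (Location, Content-Location, URI), not links embedded in the HTML that Tomcat renders, so pages generated under the /myApp context keep that prefix in their links. A small Python sketch of what the header rewriting does (a hypothetical helper, not Apache code):

```python
def proxy_pass_reverse(location, backend_prefix, frontend_prefix):
    """Mimic ProxyPassReverse: rewrite a backend Location header
    so the client sees the frontend path instead."""
    if location.startswith(backend_prefix):
        return frontend_prefix + location[len(backend_prefix):]
    return location
```

Links inside response bodies are untouched by this mechanism; they typically need either the application generating context-relative URLs or a body-rewriting filter such as mod_proxy_html.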

Restore deleted exchange 2007 public folder from backup .edb file

Posted: 14 Aug 2021 08:01 PM PDT

We are running a stand-alone instance of Exchange 2007 without replication of any kind. We do have nightly backups. A user deleted a public folder, and I need to restore that from one of our full database backups (I have the .edb file).

I have tried creating another storage group, but when I try to create another public folder database, I get an error stating there can only be one public folder database. I also tried using the Recovery Storage Group, but learned that is only usable for mailbox restores. My next thought was to spin up a new Exchange VM and somehow copy it over from there, but I'm not sure if that's best...or how exactly to do it.

What are my best options?

Nginx URL virtual host rewrite issues with Magento e-commerce

Posted: 14 Aug 2021 03:02 PM PDT

I've been running into some problems with my URL rewrites. When I click a link in my Magento back-end it completely messes up the URL.

We start with this link:

http://icanttellmydomain.nl/index.php/paneel/dashboard/index/key/26f665360ac9f2e3e9b5c69b097fcb6b/

But we are redirected here:

http://icanttellmydomain.nl/index.php/paneel/permissions_user/index/key/index.php/paneel/system_config/index/key/4015c27aea900ad7fceb13e27b76560c/index.php/paneel/dashboard/index/key/26f665360ac9f2e3e9b5c69b097fcb6b/index.php/paneel/dashboard/index/key/26f665360ac9f2e3e9b5c69b097fcb6b/index.php/paneel/dashboard/index/key/26f665360ac9f2e3e9b5c69b097fcb6b/index.php/paneel/dashboard/index/key/26f665360ac9f2e3e9b5c69b097fcb6b/index.php/paneel/dashboard/index...............

It keeps repeating 'index.php' and the URL's path, looping until it gives me a 500 internal error or "The page isn't redirecting properly".

I'm pretty sure it has to do with my vhost configuration. I tried commenting:

#Forward paths like /js/index.php/x.js to relevant handler
#   location ~ .php/ {
#       rewrite ^(.*.php)/ $1 last;
#   }

but it didn't do the trick.
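For reference, that commented-out rule is meant to strip the path-info portion after `.php`, e.g. turning `/js/index.php/x.js` into `/js/index.php`. A quick Python check of what the regex does (an illustrative model of nginx's `rewrite`, which replaces the URI when the pattern matches; the dot before `php` is escaped here, which the original pattern leaves unescaped):

```python
import re

def strip_php_pathinfo(uri):
    """Model of: rewrite ^(.*.php)/ $1 last;
    Keep everything up to the '.php' that precedes a slash,
    dropping the trailing path info."""
    m = re.match(r"^(.*\.php)/", uri)
    return m.group(1) if m else uri
```

URIs without a slash after `.php` pass through unchanged, so the rule only affects path-info style requests.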

My Vhost:

server {
    listen   80; ## listen for ipv4; this line is default and implied
    listen   [::]:80 default_server ipv6only=on; ## listen for ipv6
    listen 443 default ssl;

    root /usr/share/nginx/www/xxxxxxxx/public/;
    index index.html index.htm;

    # Make site accessible from http://<serverip/domain>/
    server_name xxx.xxx.xxx.xxx;

    error_log  /var/log/nginx/error.log; #warn; # logged at warn level
    #access_log off; # disabled to save I/O
    access_log /var/log/nginx/access.log;

    location / {
        index index.html index.php;
        #autoindex on;
        ## If missing, pass the URI to Magento's front handler
        try_files $uri $uri/ @handler;
        expires max;
    }

    ## These locations need to be denied
    location ^~ /app/                { deny all; }
    location ^~ /includes/           { deny all; }
    location ^~ /lib/                { deny all; }
    location ^~ /media/downloadable/ { deny all; }
    location ^~ /pkginfo/            { deny all; }
    location ^~ /report/config.xml   { deny all; }
    location ^~ /var/                { deny all; }

    ## Disable .htaccess and other hidden files
    location /. {
        access_log off;
        log_not_found off;
        return 404;
        deny all;
    }

    ## Magento uses a common front handler
    location @handler {
        rewrite / /index.php;
    }

    #Forward paths like /js/index.php/x.js to relevant handler
    #   location ~ .php/ {
    #       rewrite ^(.*.php)/ $1 last;
    #   }

    ## Rewrite for versioned CSS+JS via filemtime (file modification time)
    location ~* ^.+\.(css|js)$ {
        rewrite ^(.+)\.(\d+)\.(css|js)$ $1.$3 last;
        expires 31536000s;
        access_log off;
        log_not_found off;
        add_header Pragma public;
        add_header Cache-Control "max-age=31536000, public";
    }

    ## php-fpm parsing
    location ~ \.php.*$ {

        ## Catch 404s that try_files misses
        if (!-e $request_filename) { rewrite / /index.php last; }

        ## Disable cache for php files
        expires off;

        ## php-fpm configuration
        fastcgi_pass   unix:/var/run/php5-fpm.sock;
        fastcgi_param  HTTPS $https if_not_empty;
        fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
        include        fastcgi_params;

        ## Store code is located at Administration > Configuration > Manage Stores
        fastcgi_param  MAGE_RUN_CODE default;
        fastcgi_param  MAGE_RUN_TYPE store;

        ## Tweak fastcgi buffers, just in case.
        fastcgi_buffer_size 128k;
        fastcgi_buffers 256 4k;
        fastcgi_busy_buffers_size 256k;
        fastcgi_temp_file_write_size 256k;

Thanks for reading! I'm new to all this stuff so take that into consideration in your replies please.

VMM 2012 Error 20552 - For ISO share VMM does not have appropriate permissions to access the resource

Posted: 14 Aug 2021 05:03 PM PDT

I have included an ISO network share in my VMM 2012 library by:

  1. Library servers -> Add Library Share -> Add Unmanaged Share.
  2. I then selected the file share, e.g. \\fs1\ISO
  3. I set the share permissions on \\fs1\ISO to everyone FULL
  4. I set the NTFS permissions to read-only for the following AD accounts:
    • VMM service account
    • VMM Library account
    • HV target host machine account
    • Network service

The problem I have is that I still get the following error regarding permissions:

Error (20552) VMM does not have appropriate permissions to access the resource \\fs1.domain.local\ISO\Zabbix_2.0_x86.i686-0.0.1.preload.iso on the scvmma1.domain.local server.

Ensure that Virtual Machine Manager has the appropriate rights to perform this action. Also, verify that CredSSP authentication is currently enabled on the service configuration of the target computer scvmma1.domain.local. To enable the CredSSP on the service configuration of the target computer, run the following command from an elevated command line: winrm set winrm/config/service/auth @{CredSSP="true"}

I have also run winrm set winrm/config/service/auth @{CredSSP="true"} on the VMM server, but no joy.

Any ideas please?

default session file Permits

Posted: 14 Aug 2021 09:07 PM PDT

I need to change the default permissions of the session files.

I know it's a high security risk.

The default permissions are 0600:

[root@server sessions]# stat sess_06pqdthgi49oq7jnlvuvsr95q1
  File: `sess_06pqdthgi49oq7jnlvuvsr95q1'
  Size: 0               Blocks: 0          IO Block: 4096   regular empty file
Device: 802h/2050d      Inode: 32473090    Links: 1
Access: (0600/-rw-------)  Uid: ( 5003/     ...)   Gid: ( 5003/     ...)

I want to set the default permissions to 0777.

This is my php.ini:

; The file storage module creates files using mode 600 by default.
; You can change that by using
;
;     session.save_path = "N;MODE;/path"
;
; where MODE is the octal representation of the mode. Note that this
; does not overwrite the process's umask.
;session.save_path = "/var/lib/php/session"
session.save_path = "/sessions"

I've changed session.save_path to

session.save_path = "N;644;/sessions"  

The new result is:

  File: `sess_avrc5442qjkcbd17g2qkenmit2'
  Size: 0               Blocks: 0          IO Block: 4096   regular empty file
Device: 802h/2050d      Inode: 32473090    Links: 1
Access: (0700/-rwx------)  Uid: ( 5003/     ...)   Gid: ( 5003/     ...)

It's now 0700, NOT 0777.

WHY?
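As the php.ini comment quoted above notes, the MODE in session.save_path does not override the process's umask: the bits that actually land on disk are MODE with the umask bits masked out. A quick sketch of that arithmetic (illustrative only; the umask values below are assumptions about the PHP process, not taken from this server):

```python
def effective_mode(requested, umask):
    """Permission bits that survive after the process umask is applied."""
    return requested & ~umask & 0o777

# With a common umask of 022, group/other write bits are stripped:
print(oct(effective_mode(0o777, 0o022)))  # 0o755
```

One consequence of this masking is that a mode like 0777 can only ever appear on disk if the process umask is 000.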

exe hangs when scheduled from SQL Agent, but fine when run by user

Posted: 14 Aug 2021 04:00 PM PDT

I have a SQL Agent job on a clustered SQL 2008R2 server on Windows 2003 Enterprise. It's an Operating System (CmdExec) step, running an executable. When it runs on schedule, the process does start up, and the job shows as running. However, it never completes the job.

When I run the executable interactively, i.e. I double-click it, it runs and completes its processing in 10 minutes or so, as expected.

I've monitored the exe with Procmon when it hangs, and it logs no errors; it just stops processing (but the exe is still running).

I'm 90% sure this is something to do with the user account running SQL agent, and the local security policy. The account has all the privileges I think it needs - log on as service, log on as batch, etc. I think over the testing period I've pretty much assigned it every right in the policy.

Any ideas why an exe would run fine interactively but fail with SQL agent?

How to monitor mysql slow log and send mail to alert?

Posted: 14 Aug 2021 10:04 PM PDT

I have enabled the MySQL slow query log on an Ubuntu server. I would like to get an email alert containing the slow SQL whenever a slow query appears, so I can optimize it. I need a lightweight solution.

CGI error from PHP when running exec() on IIS

Posted: 14 Aug 2021 05:03 PM PDT

Windows Server 2003 x64

PHP 5.2

IIS 6.0

The program Ink2Png.exe is set with Everyone->Read and Execute permissions.

As does its dependency (microsoft.ink.dll).

PHP Safe Mode is off

exec() is passed [the full exe path], a space, then [the full path to another file].

This other file also has full read permissions.

The output directory has full write permissions.

As soon as exec() is hit, the connection dies, the browser does not even receive a full set of http headers, and it reports a CGI error. Examining the output, it appears the program was not even run.

Any ideas? How can I figure out what exactly is happening and get it running again?

EDIT: Also, it is a .NET application, if that is significant in any way.

How to recover from "Too many Authentication Failures for user root"

Posted: 14 Aug 2021 07:12 PM PDT

I've made several attempts to establish an SSH connection as root@host using the PuTTY terminal. I specified the wrong credentials several times, then specified them correctly; after the credentials were accepted, the SSH session broke with

"Server unexpectedly closed network connection".

This error is reported by the PuTTY terminal. When trying to ssh root@localhost from the local console, it works fine. It also works fine when I ssh otheruser@host from another host. So network connectivity issues are not to blame. The only error I can think of is "Too many Authentication Failures for user root", although PuTTY reported a different error.

The question is: how do I recover from this error condition and let PuTTY log in again? Restarting sshd does not seem to help.
