Thursday, April 1, 2021

Recent Questions - Server Fault



mysql 8.0.23 enterprise backup issue

Posted: 01 Apr 2021 10:24 PM PDT

I have a MySQL backup issue: I get the error below when running a backup from Workbench. What is wrong?

main error: unknown option --h
MySQL backup exit code: 7, invalid arguments
MySQL backup failed with errors

Script with output run as administrator leaves no file

Posted: 01 Apr 2021 09:36 PM PDT

I am trying to run this script as administrator. When I run it without admin rights, the log file is saved in C:\Windows\System32. I put echo %~dp0 in the script and it echoes the current directory.

%~dp0
SC QUERY | FINDSTR "Fax"
IF ERRORLEVEL 0 SC CONFIG Fax START= DISABLED >> log.txt

If I hard set the log file as this:

%~dp0
SC QUERY | FINDSTR "Fax"
IF ERRORLEVEL 0 SC CONFIG Fax START= DISABLED >> C:\logs\log.txt

I can get a log in that location. But I only need to run it on machines with issues, and I would like to have the log file on the USB drive.
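A common cause (not certain from the question alone, but consistent with the symptom) is that elevated scripts start with C:\Windows\System32 as the working directory, so a relative log.txt lands there. One sketch is to switch to the script's own directory first; %~dp0 expands to the folder the batch file lives in, i.e. the USB stick:

```bat
@echo off
rem Change the working directory to where this script lives (the USB stick).
rem /d also switches the drive letter if needed.
cd /d "%~dp0"

SC QUERY | FINDSTR "Fax"
IF ERRORLEVEL 0 SC CONFIG Fax START= DISABLED >> log.txt
```

Alternatively, writing straight to `>> "%~dp0log.txt"` avoids changing directory at all.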

How to instruct Postfix to block a specific Hash (to protect against a known IOC attack) of attached files or content?

Posted: 01 Apr 2021 08:22 PM PDT

An attack is ongoing, and we want to block, at the mail relay level, certain hashes (of attached files or even of the email content) from being sent or received. I want to instruct Postfix to reject mail matching those hashes. How can I do this? I have tried and searched but didn't succeed.

Thank you so much in advance.
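Postfix itself has no built-in digest matcher (body_checks works on decoded lines, not on file hashes), so the usual route is a content filter that inspects the message and rejects on a match. Purely as an illustrative sketch (the hash list and any wiring into master.cf are assumptions, not Postfix built-ins), the hashing side could look like:

```python
import hashlib
from email import message_from_bytes

# Placeholder IOC list -- replace with the real SHA-256 digests to block.
BLOCKED_SHA256 = {"0" * 64}

def message_is_blocked(raw: bytes) -> bool:
    """Return True if any decoded MIME part's SHA-256 is on the block list."""
    msg = message_from_bytes(raw)
    for part in msg.walk():
        payload = part.get_payload(decode=True)  # None for multipart containers
        if payload and hashlib.sha256(payload).hexdigest() in BLOCKED_SHA256:
            return True
    return False
```

A wrapper script around this would read the raw message on stdin from a pipe(8) transport declared as a content_filter in master.cf and exit non-zero (e.g. EX_UNAVAILABLE, 69) to reject the message; that is standard Postfix content-filter practice, but the exact wiring is left as an exercise and untested here.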

OOM killer invoked gfp_mask=0x24201ca

Posted: 01 Apr 2021 07:29 PM PDT

I am trying to find out what overloads memory and causes the OOM killer to act. It happens roughly once every three hours. The average live client count is around 300, and I cannot figure out what the trigger is, because the oom-killer can be invoked from different processes, from the server-host process to beamium and noderig. The server-host process also keeps filling up memory even though clients might be disconnecting and their count is dropping. I am out of ideas, so I am looking for any way to debug deeper. Maybe the problem is around swap? free -h information

System log:

Mar 31 00:26:44 pashamachine kernel: Write-error on swap-device (259:1:366872)
Mar 31 00:26:44 pashamachine kernel: Write-error on swap-device (259:1:502088)
Mar 31 00:26:44 pashamachine kernel: Write-error on swap-device (259:1:502080)
Mar 31 00:26:44 pashamachine kernel: Write-error on swap-device (259:1:501048)
Mar 31 00:26:44 pashamachine kernel: Write-error on swap-device (259:1:501040)
Mar 31 00:26:44 pashamachine kernel: Write-error on swap-device (259:1:501032)
Mar 31 00:26:44 pashamachine kernel: Write-error on swap-device (259:1:501024)
Mar 31 00:26:44 pashamachine kernel: Write-error on swap-device (259:1:502104)
Mar 31 00:26:44 pashamachine kernel: Write-error on swap-device (259:1:502096)
Mar 31 00:26:44 pashamachine kernel: Write-error on swap-device (259:1:367496)
Mar 31 00:26:44 pashamachine kernel: Write-error on swap-device (259:1:336808)
Mar 31 00:26:44 pashamachine kernel: Write-error on swap-device (259:1:336792)
Mar 31 00:26:44 pashamachine kernel: Write-error on swap-device (259:1:336784)
Mar 31 00:26:44 pashamachine kernel: Write-error on swap-device (259:1:336768)
Mar 31 00:26:44 pashamachine kernel: Write-error on swap-device (259:1:848456)
Mar 31 00:26:44 pashamachine kernel: Write-error on swap-device (259:1:845352)
Mar 31 00:26:44 pashamachine kernel: Write-error on swap-device (259:1:464)
Mar 31 00:26:47 pashamachine server[5514]: [N] IPaddress:62588 has been disconnected.
Mar 31 00:26:58 pashamachine server[5514]: [N] IPaddress:60275 has been disconnected.
Mar 31 00:27:11 pashamachine kernel: mysqld invoked oom-killer: gfp_mask=0x24201ca(GFP_HIGHUSER_MOVABLE|__GFP_COLD), nodemask=0, order=0, oom_score_adj=0
Mar 31 00:27:11 pashamachine kernel: mysqld cpuset=/ mems_allowed=0
Mar 31 00:27:11 pashamachine kernel: CPU: 4 PID: 2517 Comm: mysqld Not tainted 4.9.168-xxxx-std-ipv6-64 #665790
Mar 31 00:27:11 pashamachine kernel: Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./E3C242D4U2-2T, BIOS L0.09E 03/14/2019
Mar 31 00:27:11 pashamachine kernel: ffffb5138692b9e0 ffffffffaf679ef7 ffffb5138692bb98 ffff9f160a8cc380
Mar 31 00:27:11 pashamachine kernel: ffffb5138692ba58 ffffffffaf22fd97 0000000000000000 0000000000000000
Mar 31 00:27:11 pashamachine kernel: 0000000000000000 0000000000000000 0000000000000000 0000000000000000
Mar 31 00:27:11 pashamachine kernel: Call Trace:
Mar 31 00:27:11 pashamachine kernel: [<ffffffffaf679ef7>] dump_stack+0x4d/0x66
Mar 31 00:27:11 pashamachine kernel: [<ffffffffaf22fd97>] dump_header+0x76/0x1f1
Mar 31 00:27:11 pashamachine kernel: [<ffffffffaf1bf37e>] oom_kill_process+0x20e/0x3e0
Mar 31 00:27:11 pashamachine kernel: [<ffffffffaf1bf87b>] out_of_memory+0x11b/0x4a0
Mar 31 00:27:11 pashamachine kernel: [<ffffffffaf1c3f44>] __alloc_pages_slowpath+0x994/0xb80
Mar 31 00:27:11 pashamachine kernel: [<ffffffffaf1c42e7>] __alloc_pages_nodemask+0x147/0x1d0
Mar 31 00:27:11 pashamachine kernel: [<ffffffffaf20d59e>] alloc_pages_current+0x9e/0x150
Mar 31 00:27:11 pashamachine kernel: [<ffffffffaf1bad93>] __page_cache_alloc+0xa3/0xe0
Mar 31 00:27:11 pashamachine kernel: [<ffffffffaf1bbc38>] ? pagecache_get_page+0x28/0x220
Mar 31 00:27:11 pashamachine kernel: [<ffffffffaf1bc12d>] filemap_fault+0x2fd/0x4a0
Mar 31 00:27:11 pashamachine kernel: [<ffffffffaf30cec1>] ext4_filemap_fault+0x31/0x50
Mar 31 00:27:11 pashamachine kernel: [<ffffffffaf1e7a53>] __do_fault+0xa3/0x1a0
Mar 31 00:27:11 pashamachine kernel: [<ffffffffaf1eb43f>] handle_mm_fault+0xb6f/0x1120
Mar 31 00:27:11 pashamachine kernel: [<ffffffffaf09ea4d>] __do_page_fault+0x22d/0x450
Mar 31 00:27:11 pashamachine kernel: [<ffffffffaf09ecac>] do_page_fault+0xc/0x10
Mar 31 00:27:11 pashamachine kernel: [<ffffffffafeac282>] page_fault+0x22/0x30
Mar 31 00:27:11 pashamachine kernel: Mem-Info:
Mar 31 00:27:11 pashamachine kernel: active_anon:7602864 inactive_anon:447235 isolated_anon:0
Mar 31 00:27:11 pashamachine kernel: active_file:162 inactive_file:347 isolated_file:0
Mar 31 00:27:11 pashamachine kernel: unevictable:1 dirty:0 writeback:0 unstable:0
Mar 31 00:27:11 pashamachine kernel: slab_reclaimable:5973 slab_unreclaimable:19315
Mar 31 00:27:11 pashamachine kernel: mapped:964 shmem:79951 pagetables:17086 bounce:0
Mar 31 00:27:11 pashamachine kernel: free:50740 free_pcp:387 free_cma:0
Mar 31 00:27:11 pashamachine kernel: Node 0 active_anon:30411456kB inactive_anon:1788940kB active_file:728kB inactive_file:680kB unevictable:4kB isolated(anon):0kB isolated(file):0kB mapped:4028kB dirty:0kB writeback:0kB shmem:319804kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 9269248kB writeback_tmp:0kB unstable:0kB pages_scanned:178 all_unreclaimable? no
Mar 31 00:27:11 pashamachine kernel: Node 0 DMA free:15896kB min:32kB low:44kB high:56kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15992kB managed:15896kB mlocked:0kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
Mar 31 00:27:11 pashamachine kernel: lowmem_reserve[]: 0 2017 31924 31924
Mar 31 00:27:11 pashamachine kernel: Node 0 DMA32 free:123888kB min:4268kB low:6332kB high:8396kB active_anon:1767176kB inactive_anon:241356kB active_file:220kB inactive_file:600kB unevictable:0kB writepending:0kB present:2140472kB managed:2140468kB mlocked:0kB slab_reclaimable:252kB slab_unreclaimable:208kB kernel_stack:64kB pagetables:3928kB bounce:0kB free_pcp:764kB local_pcp:0kB free_cma:0kB
Mar 31 00:27:11 pashamachine kernel: lowmem_reserve[]: 0 0 29907 29907
Mar 31 00:27:11 pashamachine kernel: Node 0 Normal free:63224kB min:63280kB low:93904kB high:124528kB active_anon:28644280kB inactive_anon:1547584kB active_file:360kB inactive_file:524kB unevictable:4kB writepending:0kB present:31178752kB managed:30628620kB mlocked:4kB slab_reclaimable:23640kB slab_unreclaimable:77052kB kernel_stack:7088kB pagetables:64416kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
Mar 31 00:27:11 pashamachine kernel: lowmem_reserve[]: 0 0 0 0
Mar 31 00:27:11 pashamachine kernel: Node 0 DMA: 2*4kB (U) 2*8kB (U) 0*16kB 2*32kB (U) 3*64kB (U) 2*128kB (U) 0*256kB 0*512kB 1*1024kB (U) 1*2048kB (M) 3*4096kB (M) = 15896kB
Mar 31 00:27:11 pashamachine kernel: Node 0 DMA32: 14*4kB (UME) 50*8kB (UM) 88*16kB (UME) 124*32kB (UME) 68*64kB (UME) 48*128kB (UME) 19*256kB (UE) 12*512kB (UE) 7*1024kB (UE) 2*2048kB (UM) 21*4096kB (UMH) = 124616kB
Mar 31 00:27:11 pashamachine kernel: Node 0 Normal: 473*4kB (UME) 261*8kB (UMEH) 269*16kB (UMEH) 234*32kB (UMEH) 170*64kB (UMEH) 81*128kB (UMEH) 53*256kB (UMEH) 23*512kB (UME) 1*1024kB (H) 0*2048kB 0*4096kB = 63388kB
Mar 31 00:27:11 pashamachine kernel: Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
Mar 31 00:27:11 pashamachine kernel: Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
Mar 31 00:27:11 pashamachine kernel: 231789 total pagecache pages
Mar 31 00:27:11 pashamachine kernel: 151263 pages in swap cache
Mar 31 00:27:11 pashamachine kernel: Swap cache stats: add 839809, delete 688546, find 3766057/3789646
Mar 31 00:27:11 pashamachine kernel: Free swap  = 0kB
Mar 31 00:27:11 pashamachine kernel: Total swap = 1046520kB
Mar 31 00:27:11 pashamachine kernel: 8333804 pages RAM
Mar 31 00:27:11 pashamachine kernel: 0 pages HighMem/MovableOnly
Mar 31 00:27:11 pashamachine kernel: 137558 pages reserved
Mar 31 00:27:11 pashamachine kernel: 0 pages hwpoisoned
Mar 31 00:27:11 pashamachine kernel: [ pid ]   uid  tgid total_vm      rss nr_ptes nr_pmds swapents oom_score_adj name
Mar 31 00:27:11 pashamachine kernel: [  480]     0   480     9091      900      15       3       79             0 systemd-journal
Mar 31 00:27:11 pashamachine kernel: [  508]     0   508     4697       14       9       3      289         -1000 systemd-udevd
Mar 31 00:27:11 pashamachine kernel: [  513]   101   513     3563       57      10       3      105             0 systemd-network
Mar 31 00:27:11 pashamachine kernel: [  681]     0   681      810       62       5       3       28             0 mdadm
Mar 31 00:27:11 pashamachine kernel: [  743]   100   743    21772       29      12       3       99             0 systemd-timesyn
Mar 31 00:27:11 pashamachine kernel: [  773]   106   773   255875     8927      77       4     4618             0 named
Mar 31 00:27:11 pashamachine kernel: [  780]     0   780     2561       65       8       3       63             0 irqbalance
Mar 31 00:27:11 pashamachine kernel: [  798]     0   798     2437       25       8       3       42             0 cron
Mar 31 00:27:11 pashamachine kernel: [  802]     0   802     2964       39       9       3      105             0 systemd-logind
Mar 31 00:27:11 pashamachine kernel: [  806]   107   806     2183      120       7       3       44          -900 dbus-daemon
Mar 31 00:27:11 pashamachine kernel: [  812]   108   812    23477     2583      50       4    10441             0 beamium
Mar 31 00:27:11 pashamachine kernel: [  837]     0   837     6834       32      16       3      157         -1000 sshd
Mar 31 00:27:11 pashamachine kernel: [  842]     0   842   498754    22631     120       7     4345             0 noderig
Mar 31 00:27:11 pashamachine kernel: [  846]     0   846     1656        0       7       3       30             0 agetty
Mar 31 00:27:11 pashamachine kernel: [  848]     0   848     1575        0       7       3      116             0 login
Mar 31 00:27:11 pashamachine kernel: [  850]     0   850     1656        0       8       3       30             0 agetty
Mar 31 00:27:11 pashamachine kernel: [ 1679]     0  1679     2013        1       6       3      145             0 screen
Mar 31 00:27:11 pashamachine kernel: [ 1680]     0  1680     2059        1       7       3      135             0 bash
Mar 31 00:27:11 pashamachine kernel: [ 2296]     0  2296     2059        1       7       3      144             0 bash
Mar 31 00:27:11 pashamachine kernel: [30306]     0 30306     2015        1       7       3      149             0 screen
Mar 31 00:27:11 pashamachine kernel: [30307]     0 30307     2059        1       7       3      137             0 bash
Mar 31 00:27:11 pashamachine kernel: [30308]   109 30308  2983591  1376774    3615      16   237632             0 mysqld
Mar 31 00:27:11 pashamachine kernel: [ 6887]     0  6887    57103      401      16       4       83             0 rsyslogd
Mar 31 00:27:11 pashamachine kernel: [ 5514]     0  5514  8497142  6404668   12981      48        0             0 server
Mar 31 00:27:11 pashamachine kernel: [ 7436]     0  7436     6954      229      17       3        0             0 sshd
Mar 31 00:27:11 pashamachine kernel: [ 7443]     0  7443      608       24       5       3        0             0 sftp-server
Mar 31 00:27:11 pashamachine kernel: [ 7753]     0  7753     6955      253      17       3        0             0 sshd
Mar 31 00:27:11 pashamachine kernel: [ 7759]     0  7759     2059      140       7       3        0             0 bash
Mar 31 00:27:11 pashamachine kernel: [ 8685]     0  8685     3769      206      11       3        0             0 top
Mar 31 00:27:11 pashamachine kernel: Out of memory: Kill process 5514 (server) score 736 or sacrifice child
Mar 31 00:27:11 pashamachine kernel: Killed process 5514 (server) total-vm:33988568kB, anon-rss:25618672kB, file-rss:0kB, shmem-rss:0kB
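As a debugging aid (a generic sketch, not specific to this box), the per-process table in an OOM report can be parsed to rank which processes hold the most resident memory and swap; the rss and swapents columns are counts of 4 KiB pages:

```python
import re

# Matches kernel OOM-report table rows such as:
# [ 5514]     0  5514  8497142  6404668   12981      48        0   0 server
ROW = re.compile(
    r"\[\s*(?P<pid>\d+)\]\s+(?P<uid>\d+)\s+\d+\s+(?P<total_vm>\d+)\s+"
    r"(?P<rss>\d+)\s+\d+\s+\d+\s+(?P<swapents>\d+)\s+(?P<adj>-?\d+)\s+(?P<name>\S+)"
)

def top_memory_users(log_text, page_kb=4):
    """Return (name, rss_kb, swap_kb) tuples sorted by resident size, largest first."""
    rows = [
        (m.group("name"),
         int(m.group("rss")) * page_kb,
         int(m.group("swapents")) * page_kb)
        for m in ROW.finditer(log_text)
    ]
    return sorted(rows, key=lambda r: r[1], reverse=True)
```

Run over the log above, this would put the server process first at 6404668 pages, i.e. 25618672 kB resident, matching the anon-rss:25618672kB on the kill line; together with the swap write errors and "Free swap = 0kB", that points at both the server process's growth and the health of the swap device as things to investigate.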

How do I get TLS 1.2 working on windows server 2008 SP2?

Posted: 01 Apr 2021 08:12 PM PDT

https://sparkleflooring.com.au

Windows Server 2008 Standard
Version 6.0 (Build 6003: Service Pack 2)
IIS 7

Background: I have been working on installing a certificate on my website http://sparkleflooring.com.au and found out the hard way that the certificate clients no longer work on Server 2008 x86. I finally got the certificate installed by using an Ubuntu host and copying it across, following https://rajbos.github.io/blog/2019/08/27/LetsEncrypt-Windows.

Now: I think the certificate is installed correctly, but I don't know how to test it. I'm getting the SSL errors below, which led me to information that TLS 1.2 is required but not installed on Server 2008. I tried the suggested patch https://www.catalog.update.microsoft.com/search.aspx?q=kb4019276 but it said it's not suitable for my system. I then used a third-party standalone program to add the registry keys, but it still doesn't work. I have also checked that the date and time are correct.

Internet Explorer

This page can't be displayed

Turn on TLS 1.0, TLS 1.1, and TLS 1.2 in Advanced settings and try connecting to https://sparkleflooring.com.au again. If this error persists, it is possible that this site uses an unsupported protocol or cipher suite such as RC4 (link for the details), which is not considered secure. Please contact your site administrator.

Firefox:

Secure Connection Failed

An error occurred during a connection to sparkleflooring.com.au. SSL received a record that exceeded the maximum permissible length.

Error code: SSL_ERROR_RX_RECORD_TOO_LONG

Chrome:

This site can't provide a secure connection

sparkleflooring.com.au sent an invalid response.

Try running Windows Network Diagnostics.

ERR_SSL_PROTOCOL_ERROR

PowerShell OpenSSL
I have no idea what this means, but I found a post saying to check OpenSSL errors, so here it is.

PS C:\Program Files (x86)\OpenSSL-Win32\bin> ./openssl s_client -connect sparkleflooring.com.au:443
CONNECTED(00000104)
3780:error:1408F10B:SSL routines:ssl3_get_record:wrong version number:ssl\record\ssl3_record.c:332:
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 5 bytes and written 324 bytes
Verification: OK
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
Early data was not sent
Verify return code: 0 (ok)
---
PS C:\Program Files (x86)\OpenSSL-Win32\bin>
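For reference: on Server 2008 SP2, TLS 1.2 support comes from update KB4019276 and is then switched on via SChannel registry keys; a commonly cited fragment (apply with care and reboot afterwards) is below. Separately, the openssl output above shows "wrong version number" after reading only 5 bytes, which often means the server answered with something other than TLS at all, so the HTTPS site binding in IIS is worth rechecking too.

```reg
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Server]
"Enabled"=dword:00000001
"DisabledByDefault"=dword:00000000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client]
"Enabled"=dword:00000001
"DisabledByDefault"=dword:00000000
```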

Use SNI with pip/pypi on Ubuntu 14 Trusty

Posted: 01 Apr 2021 06:30 PM PDT

pip no longer works on old versions of Python 2, because https://pypi.python.org/ now requires Server Name Indication (SNI), which isn't available in Python 2.7.6, the version of Python that comes with Ubuntu 14 Trusty.

https://github.com/pypa/pypi-support/issues/978 explains:

Upgrading to the last Python 2.7 release is an option.

However, note that the Python 2.7 series itself is now end of life, and pip dropped support for it in version 21.0.

I'm maintaining a legacy code base; I need to pursue the least invasive option possible to get pip working again. What are my options? What's the smallest possible change I can make to get back to (temporarily) working order?
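One relatively low-impact sketch (assuming build tools are available and you leave the system python untouched) is to install the final 2.7 release, 2.7.18, side by side; its ssl module supports SNI, and the last py2-compatible pip release (the 20.3 series) still runs on it:

```shell
# Build the last Python 2.7 release into /opt, leaving the system python alone
wget https://www.python.org/ftp/python/2.7.18/Python-2.7.18.tgz
tar xzf Python-2.7.18.tgz
cd Python-2.7.18
./configure --prefix=/opt/python2.7.18 --with-ensurepip=install
make && sudo make altinstall

# Use its pip explicitly for the legacy project
/opt/python2.7.18/bin/pip2.7 install --upgrade "pip<21" setuptools
```

The /opt prefix is arbitrary; the point is that the old 2.7.6 interpreter stays untouched for everything else on the box.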

cron is not restarting pm2

Posted: 01 Apr 2021 06:22 PM PDT

I've tried many things but cron just won't restart pm2. I set the crontab -e with:

SHELL=/bin/sh
PATH=/bin:/sbin:/usr/bin:/usr/sbin

*/10 * * * * /usr/bin/node /usr/bin/pm2 restart all

And then I type: cat /var/log/cron.log

And it says:

Apr 2 01:00:01 fatsecret CRON[8202]: (peteblank) CMD (/usr/bin/node /usr/bin/pm2 restart all)
Apr 2 01:00:01 fatsecret CRON[8201]: (peteblank) MAIL (mailed 78 bytes of output but got status 0x004b from MTA#012)
Apr 2 01:10:01 fatsecret CRON[8454]: (peteblank) CMD (/usr/bin/node /usr/bin/pm2 restart all)
Apr 2 01:10:01 fatsecret CRON[8453]: (peteblank) MAIL (mailed 78 bytes of output but got status 0x004b from MTA#012)

So it looks like it's running every 10 minutes, but pm2 is not actually restarted.
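The MAIL lines show the job is producing output that cron could not deliver by mail, so a first step is usually to capture that output in a file (the path below is arbitrary); it often reveals a pm2 error such as the daemon not being found for the crontab user:

```
SHELL=/bin/sh
PATH=/bin:/sbin:/usr/bin:/usr/sbin

*/10 * * * * /usr/bin/node /usr/bin/pm2 restart all >> /tmp/pm2-cron.log 2>&1
```

Also worth checking: pm2 keeps its per-user state under $HOME/.pm2, so if the pm2 daemon was started by a different user (or with a different PM2_HOME), restart all run from this crontab finds nothing to restart; setting PM2_HOME explicitly in the crontab can help in that case.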

Nginx + uWSGI + Flask Connection Reset

Posted: 01 Apr 2021 05:07 PM PDT

Problem

I have a Flask app deployed using Elastic Beanstalk's "Single Container Docker" platform (latest revision 3.2.5 at the time of writing), with an "Application Load Balancer" in front of it. I had this same Flask app deployed in EB with the "Python 3.6" platform (and a "Classic Load Balancer") for ages, but have since started having issues after upgrading to the new deployment. I am a relative novice when it comes to configuring Nginx / uWSGI, so bear with me...

Specific Issue

I see the following errors in my Nginx error.log file on ~0.01% of the requests my environment handles:

<timestamp> [error] 22400#0: *52192 upstream prematurely closed connection while reading response header from upstream, client: <ip>, server: , request: "POST <endpoint> HTTP/1.1", upstream: "http://<ip>:5000/<endpoint>", host: "<my hostname>"
...
<timestamp> [error] 22400#0: *101979 readv() failed (104: Connection reset by peer) while reading upstream, client: <ip>, server: , request: "POST <endpoint> HTTP/1.1", upstream: "http://<ip>:5000/<endpoint>", host: "<my hostname>"

I see these errors across requests to different endpoints, using different HTTP methods (GET and POST), and at seemingly random times. Additionally, I do not see any application errors in my Flask app logs, which indicates that this is not an application issue but rather a configuration one.

Discussion

I ended up reading and trying a lot of stuff, so I'll recount my experience for posterity. The answer I arrived at seems so simple that I'm still suspicious that I've got it right.

From the reading I've done, this sounded like a pretty straightforward issue with some misconfigured timeouts between Nginx + uWSGI. I was encouraged after reading this post which describes almost my exact situation with Elastic Beanstalk.

Part 1: Semi-Random Flailing

In the numerous and varied answers on this post I found some things to try:

  1. I tried setting the uWSGI parameter post-buffering = 32768 since people suggested that. It did not help, which makes sense because the setting applies only to requests with a Body and I had been observing the aforementioned errors on GET requests as well.
  2. I tried playing with Nginx's keepalive + keepalive_timeout and uWSGI's so-keepalive, http-timeout, and socket-timeout.

I realized from reading the docs that these uWSGI settings definitely weren't going to help, although I held out hope for so-keepalive.

At this point I did notice a relatively significant decrease in the frequency of these errors, but they did not go away altogether. Like a bad engineer, I changed multiple variables at once in some of these trials. Thus, it's hard to know exactly what helped. I suspect I made things better by setting Nginx keepalive to a number of connections <= what I saw was the maximum connections it could handle in the uWSGI log (100 connections). Anyone else's insight on that one is welcome, albeit there's not much to go on...

Part 2: A fix, I think...

I decided to try overriding the default upstream definition Elastic Beanstalk puts into the Nginx config. The original looked like this:

upstream docker {
    server <some ip>:5000;
    keepalive 256;
}

All I did was replace this with my own upstream, change the Nginx location to point at my custom upstream (below) and simply not set the keepalive parameter. Like so:

upstream myapp {
    server <some ip>:5000;
}
...
location / {
    # proxy_pass http://docker;
    proxy_pass http://myapp;
}

This seems to work... Since putting in the change I have basically seen zero 5xx errors in my Elastic Beanstalk environment. The fact that this works also seems to be corroborated by this answer which mentions:

... a uwsgi connection isn't reusable and so it gets closed and a new connection opened with every request; it wouldn't remain idle for any significant amount of time in normal operation.

I'm not sure where that is documented, but I didn't notice it when reading about using uWSGI + Nginx. If that's accurate, it certainly explains a lot.

Conclusion / Help?

I'm really glad I was able to figure this one out and the API seems to be working really well, but I can't kick the feeling that I don't understand why this works or I've committed some grave sin with this configuration.

It felt a bit cumbersome to override this stuff in Elastic Beanstalk, which makes me think I shouldn't have. With the popularity of uWSGI for python webapps, my spidey-sense is telling me that there should have already been numerous posts about this Nginx keepalive playing poorly with uWSGI. Especially since that's in the default configuration for this Elastic Beanstalk platform.

If you've read this far and know things, feel free to weigh in on the situation. Hopefully, at the very least, the next person to see those errors in their Nginx logs has another data point as to what the problem could be.

Azure File "System error 1396 has occurred. The target account name is incorrect."

Posted: 01 Apr 2021 04:39 PM PDT

I'm testing out deploying Azure Files using AD DS permissions. I was able to sync our on-prem file server to Azure. I am able to mount/map the drive using:

net use \\storageaccountname.file.core.windows.net\filesharename STORAGEACCOUNTKEY /user:Azure\storageaccountusername

However, when I try to map the drive without "STORAGEACCOUNTKEY /user:Azure\storageaccountusername", I get the message:

System error 1396 has occurred. The target account name is incorrect.

Anyone seen this issue before?

How can I remove all cookies except session cookies from nginx responses?

Posted: 01 Apr 2021 04:02 PM PDT

I'm serving several WordPress sites via nginx & PHP-FPM. Sometimes plugins randomly set cookies that are unwanted, and that do not have consent. For those, and for privacy in general, I want to suppress all cookies except those that are needed to support admin logins, i.e. session cookies. I don't know the names, paths or domains of the cookies that are set ahead of time. Essentially if it's a Set-Cookie header containing Expires, it needs to die.

I've seen alternatives where configs set new cookies that have the same names but immediate expiry times, but I don't want these cookies to ever get as far as the client.

I have looked at the stock nginx config options and that doesn't seem to be possible – though it's very easy to set more! The nginx headers_more extension has slightly more power in its more_clear_headers directive, but it won't unset based on regular expressions, only simple wildcards; I can't simply search for Expires because that occurs in other headers that are needed.

So I'm wondering if I need to dive into Lua scripting to get nginx to do this, which I have no idea how to do!

Any better ideas how to do this?
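On the Lua option: with OpenResty (or nginx built with lua-nginx-module), a header filter can drop persistent cookies while letting session cookies through. This is an untested sketch, and it assumes "persistent" simply means the Set-Cookie header carries an Expires or Max-Age attribute:

```nginx
header_filter_by_lua_block {
    local cookies = ngx.header["Set-Cookie"]
    if not cookies then return end
    if type(cookies) == "string" then cookies = { cookies } end

    local keep = {}
    for _, c in ipairs(cookies) do
        local lc = c:lower()
        -- session cookies have neither Expires nor Max-Age
        if not lc:find("expires=", 1, true) and not lc:find("max-age=", 1, true) then
            keep[#keep + 1] = c
        end
    end
    ngx.header["Set-Cookie"] = (#keep > 0) and keep or nil
}
```

Because this runs in nginx's header filter phase, the suppressed cookies never reach the client at all, which matches the requirement above.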

Debian Gnome display - unable to create directory '/run/user/1001/dconf'

Posted: 01 Apr 2021 03:52 PM PDT

When I try to show the GNOME version from the SSH command line, I get:

user@debian:$ gnome-session --version

(process:6888): dconf-CRITICAL **: 22:45:31.084: unable to create directory '/run/user/1001/dconf': Permission denied.  dconf will not work properly.
(process:6888): dconf-CRITICAL **: 22:45:31.085: unable to create directory '/run/user/1001/dconf': Permission denied.  dconf will not work properly.
** (process:6887): WARNING **: 22:45:31.117: Could not make bus activated clients aware of XDG_CURRENT_DESKTOP=GNOME environment variable: Could not connect: Permission denied
X11 connection rejected because of wrong authentication.
X11 connection rejected because of wrong authentication.

user@debian:~$ sudo gnome-session --version

X11 connection rejected because of wrong authentication.
X11 connection rejected because of wrong authentication.
X11 connection rejected because of wrong authentication.
X11 connection rejected because of wrong authentication.

How can this issue be fixed?

How does Cloudflare Firewall rule ordering work?

Posted: 01 Apr 2021 03:51 PM PDT

How does the order of Cloudflare rules work? I read the documentation, but it doesn't explain it (it's supposedly self-explanatory), but is it?

For example, if I want to block an ASN but allow an IP inside of it, what order should I use?

ORDER A:

Rule 1: Block ASN
Rule 2: Allow IP

ORDER B:

Rule 1: Allow IP
Rule 2: Block ASN
  • Does it stop after the first matching rule? If so, the solution is B.

Or

  • Does it check all rules and THEN block? If so, the solution is A.

I think this is vital information to know.

Sent items do not appear in the "Sent Items" folder in Outlook for Desktop, Outlook Mobile (iOS), or Outlook 365

Posted: 01 Apr 2021 07:39 PM PDT

One of our users is no longer able to see any mail in his "Sent Items" folder. When he sends out a message it sits in the Outbox until he manually selects "Send/Receive" from the menu. I checked the Outlook Admin Center and confirmed the messages are sending, but they do not appear under "Sent Items" on Desktop, Mobile (iOS), or Online.

I did confirm that he has the app configured to save sent mail, and his Windows edition does not have group policy settings. Microsoft says the Exchange service is up and no other users have reported an issue.

Can anyone provide guidance as to where I should look next?

Cron is not running from docker container... failed

Posted: 01 Apr 2021 06:00 PM PDT

I am trying to create a cron task in a Docker container. Everything is configured according to @VonC's answer.
My Dockerfile looks like this:

FROM python:3.6.9

WORKDIR usr/src/mydir
COPY requirements.txt .

# Add crontab file in the cron directory
ADD crontab /etc/cron.d/hello-cron

# Give execution rights on the cron job
RUN chmod 0644 /etc/cron.d/hello-cron

# Create the log file to be able to run tail
RUN touch /var/log/cron.log

#Install Cron
RUN apt-get update
RUN apt-get -y install cron

# Run the command on container startup
CMD cron && tail -f /var/log/cron.log

RUN pip install --no-cache-dir -r requirements.txt
COPY . .

But the cron service doesn't start up by default

[FAIL] cron is not running ... failed!  

The cron service starts working after starting it explicitly inside the container:

service cron start  

What's wrong?
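One detail worth checking (an assumption, since it depends on how the container is started): the CMD only runs when the container's main process is that CMD; if the image is run with a different command, or something later expects cron to be managed as a service, it never gets started. A common pattern is to make cron itself the foreground main process:

```dockerfile
# Sketch: run cron in the foreground as the container's main process
CMD ["cron", "-f"]
```

With cron as PID 1 in the foreground, the tail workaround and the service-style startup are no longer needed.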

IIS URL Rewrite - Redirect root to subfolder

Posted: 01 Apr 2021 05:03 PM PDT

I want: http://somesite.com to redirect to http://somesite.com/subfolder

Seems like a pretty simple request. I've followed the sources online, and they all indicate I should use ^$ for the regex pattern. I've also added an HTTP to HTTPS redirect, and it works fine. I've also tried disabling that rule just to make sure it wasn't interfering. This is running on IIS 10 / Server 2016.

My web.config looks like this:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <system.webServer>
        <rewrite>
            <rules>
                <clear />
                <rule name="Redirect root to NmConsole" stopProcessing="true">
                    <match url="^$" ignoreCase="true" />
                    <conditions logicalGrouping="MatchAll" trackAllCaptures="false" />
                    <action type="Redirect" url="/subfolder" appendQueryString="true" />
                </rule>
                <rule name="Redirect to HTTPS" enabled="true" stopProcessing="true">
                    <match url="(.*)" />
                    <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
                        <add input="{HTTPS}" pattern="^OFF$" />
                    </conditions>
                    <action type="Redirect" url="https://{HTTP_HOST}/{R:1}" />
                </rule>
            </rules>
        </rewrite>
    </system.webServer>
</configuration>

Requests to the root domain do not redirect. Why isn't it working?

GCP - "kubectl rollout restart" results: error: unknown command "restart"

Posted: 01 Apr 2021 10:21 PM PDT

My GCP kubernetes cluster version is: Master version 1.15.7-gke.2

When I run in the cloud shell kubectl rollout restart

I am getting the error: unknown command "restart"

What might be the reason?

update
kubectl version resulted: v1.15.7-gke.2
kubectl version --client resulted: v1.15.7

fatnj@cloudshell:~ (pop)$ kubectl version --client
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.7", GitCommit:"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4", GitTreeState:"clean", BuildDate:"2019-12-11T12:42:56Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}
fatnj@cloudshell:~ (pop)$ kubectl rollout restart
error: required resource not specified

Thanks
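For context: kubectl rollout restart was added in kubectl 1.15, which matches the update above (the original "unknown command" error came from a pre-1.15 client). With a 1.15+ client it still needs a resource argument, for example (deployment name is a placeholder):

```
kubectl rollout restart deployment/<deployment-name>
```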

TLS 1.2 only on Windows Server with RD Services breaks RDP

Posted: 01 Apr 2021 09:05 PM PDT

I've been experiencing an RDP issue whenever TLS 1.0 is disabled in my environment. I've seen many others report the same issues across the web.

In November 2018, Microsoft released a patch for Server 2012 R2 that fixed a bug wherein FIPS policy would silently re-enable TLS 1.0/1.1 support.

A Server 2012R2 or 2016 server running Remote Desktop Services will fail to allow non-console connections when TLS 1.0/1.1 is turned off.

The above linked article proposes:

a. Not using RDS with a Connection Broker, which breaks our use case

b. Not disabling TLS 1.0, which breaks our security posture

c. Configuring an HA Connection Broker on a dedicated SQL server, which seems like a large effort with additional cost we'd prefer to avoid.

Has anyone else resolved this issue any other way?

Or, is it possible to set up an HA connection broker without actually having a second RDS server?

We could place the SQL connection on a server that already exists in the environment in that case.

How to create my own simple autoresponder for Postfix with custom conditions?

Posted: 01 Apr 2021 04:07 PM PDT

Postfix/Dovecot

I want to create my own simple custom autoresponder for Postfix, in Python for instance. I don't need a third-party ready-to-use one.

I want it to have some custom conditions such as "autoreply only when 'from == A' and/or 'to == B' and/or 'there has been no autoreply to that email today'", etc.

I've found out that I'll need to use either content_filter or spawn in master.cf. Not a milter, because a milter is triggered as a message is being put into the queue, whereas I'll need to autoreply to messages that have already come through. Probably.

Other better options?

How do I implement that?
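As a sketch of the filter half (assuming a pipe-based content_filter service in master.cf hands each queued message to a script on stdin; the addresses and the `already_replied_today` store are hypothetical placeholders), the custom conditions might look like:

```python
import sys
from email import message_from_binary_file
from email.utils import parseaddr

def should_autoreply(msg, already_replied_today):
    """Custom conditions: from == A or to == B, and at most one reply per sender per day."""
    sender = parseaddr(msg.get("From", ""))[1]
    rcpt = parseaddr(msg.get("To", ""))[1]
    if sender != "a@example.com" and rcpt != "b@example.com":  # hypothetical A and B
        return False
    return sender not in already_replied_today

if __name__ == "__main__" and not sys.stdin.isatty():
    # content_filter pipes the full message on stdin
    msg = message_from_binary_file(sys.stdin.buffer)
    if should_autoreply(msg, already_replied_today=set()):
        pass  # build the reply here and hand it off, e.g. via sendmail(1)
```

The reinjection of the original message and the per-day bookkeeping (a small database or on-disk set) are left out of the sketch.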

Securely add a host (e.g. GitHub) to the SSH known_hosts file

Posted: 01 Apr 2021 04:13 PM PDT

How can I add a host key to the SSH known_hosts file securely?

I'm setting up a development machine, and I want to (e.g.) prevent git from prompting when I clone a repository from github.com using SSH.

I know that I can use StrictHostKeyChecking=no (e.g. this answer), but that's not secure.

So far, I've found...

  1. GitHub publishes their SSH key fingerprints at https://help.github.com/articles/github-s-ssh-key-fingerprints/

  2. I can use ssh-keyscan to get the host key for github.com.

How do I combine these facts? Given a prepopulated list of fingerprints, how do I verify that the output of ssh-keyscan can be added to the known_hosts file?


I guess I'm asking the following:

How do I get the fingerprint for a key returned by ssh-keyscan?

Let's assume that I've already been MITM-ed for SSH, but that I can trust the GitHub HTTPS page (because it has a valid certificate chain).

That means that I've got some (suspect) SSH host keys (from ssh-keyscan) and some (trusted) key fingerprints. How do I verify one against the other?


Related: how do I hash the host portion of the output from ssh-keyscan? Or can I mix hashed/unhashed hosts in known_hosts?
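Putting the two facts together, one approach (a sketch; assumes the OpenSSH client tools) is to fingerprint the ssh-keyscan output with ssh-keygen -lf and compare by eye against the HTTPS-published list before appending:

```shell
# Fetch the (as yet untrusted) host keys, then print their SHA256 fingerprints
ssh-keyscan github.com > /tmp/github.keys
ssh-keygen -lf /tmp/github.keys

# If the fingerprints match the TLS-protected page, append the keys.
# ssh-keyscan -H emits hashed hostnames; hashed and plain entries can
# coexist in the same known_hosts file.
ssh-keyscan -H github.com >> ~/.ssh/known_hosts
```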

Is there any point in having proxy config instead a match-all Location?

Posted: 01 Apr 2021 05:03 PM PDT

I have inherited a system and there are a few things that no one knows why they are the way they are anymore.

In the httpd configuration, I've come across a few occurrences of Location directives that match all paths:

<Location />
    ProxyPass http://localhost:4500/ retry=1 acquire=3000 timeout=600 Keepalive=On
    ProxyPassReverse http://localhost:4500/
</Location>

Isn't the above just equivalent to not having the Location directive?

ProxyPass "/" http://localhost:4500/ retry=1 acquire=3000 timeout=600 Keepalive=On
ProxyPassReverse "/" http://localhost:4500/

Is there any advantage of the first over the second?

Using Let's Encrypt certs on LAN with DNS redirection?

Posted: 01 Apr 2021 09:05 PM PDT

I'm trying to use existing LE certs with a server on my LAN. I exposed port 443 to get the certs for mine.example.com and https access works fine from the WAN.

However, I assumed (perhaps foolishly) that I might be able to use the same certs internally by setting up DNS redirection (using dnsmasq on a separate box) on my LAN to point mine.example.com to the local IP.

Redirection works fine and points local machines to the internal IP when I go to mine.example.com but the certs now show 'Certificate Authority Invalid' errors.
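For reference, the dnsmasq override described above is a one-liner (the hostname and LAN address here are placeholders):

```
# /etc/dnsmasq.conf -- answer mine.example.com with the server's LAN address
address=/mine.example.com/192.168.1.50
```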

Perhaps I misunderstand how the CA process works but I assumed that, since LE certs are DNS based, they should still work with local DNS redirection.

Does anyone know how to make this work?

Or can anyone explain why it doesn't work?


I know I can get different certs for local machines from LE but that would mean trying to configure the server to use different certs for internal and external access. Assuming I need to do this, is there an easy way to use different certs depending on source traffic?

I'll be serving web content through nginx and also a Webmin admin panel so it may be relatively easy to do for nginx given the flexibility in the configs (although google hasn't been too helpful here either) but not sure about other web services running on the machine?


P.S. sorry if this turns out to be a duplicate but couldn't find anything with a lot of searching here (or on the Googles).

Powershell find orphaned processes

Posted: 01 Apr 2021 10:03 PM PDT

I am looking for a way to find processes that do not have a parent process running (orphaned processes). I'm attempting to do this using win32_process. I have the query that returns the attributes needed; it's the comparison I'm struggling with:

gwmi win32_process -ComputerName $hostname |
    select ProcessID,ParentProcessID,@{l="Username";e={$_.getowner().user}} |
    where{$_.Username -like $username}

I have tried compare-object -IncludeEqual against the two arrays and get an overwhelming number of results, so many that I doubt the comparison is doing what I expect given the arrays I'm feeding it. I think there is value in the diff command, but I am not familiar with its usage other than feeding it arrays as well. Does anyone have experience with the diff command and/or another solution?

The end goal is to compare or diff the two arrays from the above wmi call:

$proc_all = gwmi win32_process -ComputerName $hostname |
    select ProcessID,ParentProcessID,@{l="Username";e={$_.getowner().user}} |
    where{$_.Username -like $username}
$sub_procs = $proc_all.Processid        # ARRAY1
$par_proces = $proc_all.ParentProcessId # ARRAY2

And then return only the ones that do not appear in both (orphaned). Thanks in advance!
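One way to express that set difference without Compare-Object (a sketch against the same WMI call, untested here) is to collect every running PID and keep only the rows whose ParentProcessId is absent from that list. Note the PID list should come from all processes, not just $username's, since a parent may belong to another user:

```powershell
$proc_all = gwmi win32_process -ComputerName $hostname |
    select ProcessID,ParentProcessID,@{l="Username";e={$_.getowner().user}} |
    where{$_.Username -like $username}

# All PIDs currently running, regardless of owner
$all_pids = (gwmi win32_process -ComputerName $hostname).ProcessId

# Orphans: parent PID no longer present in the process list
$orphans = $proc_all | Where-Object { $all_pids -notcontains $_.ParentProcessID }
$orphans
```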

How can I get Laravel app routing to work in a sub-folder of a WordPress site?

Posted: 01 Apr 2021 06:00 PM PDT

I've got an existing WordPress site and I need to get a Laravel app to work in a sub-folder called 'api'. This is an nginx site, so .htaccess redirects will not work, and the best solution if it needs a redirect would be a PHP solution as I'm not sure I'll be able to access the nginx config directly on this particular server. I'm able to access the index.php file in the /public/ folder of the Laravel app, but going to /api/route/ takes me to a WordPress 404 page. I tried doing redirects in nginx config and PHP but nothing seems to be working. Is there something specific I need to do for putting a Laravel app in a sub-folder? I've inherited the project from another person and it is currently working where it is but it needs to be moved to a new server.

My routes look like this:

Route::group(array('before' => 'api_auth'), function()
{
    Route::get('/', 'Home\HomeController@index');
    Route::resource('cusomter', 'Customer\CustomerController', array('only' => array('show', 'store')));
    Route::resource('customer.conversion', 'Customer\CustomerController', array('only' => array('index')));
    Route::resource('customer.search', 'Customer\CustomerController', array('only' => array('index')));
});
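If the nginx config does turn out to be reachable, the usual cause of that 404 is that requests under /api/ fall through to WordPress's front controller instead of Laravel's. One sketch (untested; assumes the app's public/ directory is exposed as /api under the document root, e.g. via a symlink, and that php-fpm listens on 127.0.0.1:9000):

```nginx
# assumes: ln -s /var/www/laravel/public /var/www/wordpress/api
location /api {
    # fall back to Laravel's index.php, not WordPress's
    try_files $uri $uri/ /api/index.php?$query_string;
}
location ~ \.php$ {
    include fastcgi_params;
    # $realpath_root resolves the symlink to the real Laravel path
    fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
    fastcgi_pass 127.0.0.1:9000;
}
```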

OwnCloud and Azure Active Directory integration

Posted: 01 Apr 2021 07:05 PM PDT

Is it possible to integrate ownCloud (https://owncloud.org) with Azure Active Directory for auth?

nginx 405's with try_files for a DELETE request instead of proxying

Posted: 01 Apr 2021 10:03 PM PDT

I have nginx proxying to php-fpm with the following config:

location / {
    try_files $uri $uri/ /index.php?$args;
}
location ~ \.php$ {
    fastcgi_pass   127.0.0.1:9000;
    fastcgi_index  index.php;
    fastcgi_param  SCRIPT_FILENAME /vol/app/www/$fastcgi_script_name;
    include        fastcgi_params;
}


Everything is working great until a DELETE request comes in like:

DELETE /?file&path=foo

When this happens nginx returns a 405 (method not allowed) and doesn't appear to proxy the request to php-fpm. What's the best way to get DELETE/PUT requests to proxy? Is there a way to bypass try_files for this type of request?

When hitting this URL, I see nothing in the error.log but access.log shows:

68.50.105.169 - - [20/Mar/2016:17:48:57 +0000] "DELETE /?file=client_img1.png&fileupload=e35485990e HTTP/1.1" 405 574 "http://ec2-foo.compute.amazonaws.com/jobs/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/48.0.2564.116 Safari/537.36" "-"

I've confirmed that I'm not hitting the proxy. My assumption is that nginx is blocking DELETE on the first "try" of try_files.
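One workaround sketch (not verified against this exact setup): nginx's static handler serves the $uri and $uri/ legs of try_files and only allows GET/HEAD/POST, so other methods can be handed straight to the front controller before try_files runs:

```nginx
location / {
    # try_files' static legs reject DELETE/PUT with 405 --
    # send those methods directly to index.php instead
    if ($request_method ~ ^(PUT|DELETE)$) {
        rewrite ^ /index.php last;
    }
    try_files $uri $uri/ /index.php?$args;
}
```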

lighttpd: remove charset=UTF-8 from content type

Posted: 01 Apr 2021 04:07 PM PDT

lighttpd 1.4.31-4+deb7u3 automatically adds ;charset=UTF-8 to the content-type of .html and .php files.

How can I remove that?


Setting the content type in PHP itself does not help; lighttpd still adds the charset parameter - as soon as the mime type begins with text/.

Removing

 include_shell "/usr/share/lighttpd/create-mime.assign.pl"  

from my config also does not help.
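One sketch worth trying (untested on this lighttpd version): explicitly re-declare the affected types in mimetype.assign, without a charset parameter, so the explicit mapping takes precedence over the generated one:

```
# Explicit content types without ;charset=UTF-8; extend as needed
mimetype.assign = (
    ".html" => "text/html",
    ".htm"  => "text/html",
    ".php"  => "text/html"
)
```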

bash rsync is Killed by signal 2

Posted: 01 Apr 2021 08:02 PM PDT

I'm trying to prevent the user from cancelling the script by using ctrl + c. The following script executes completely, except for rsync, which insists on dying, displaying the error Killed by signal 2.

Is it possible to avoid rsync from dying? If so, can I put it in the background, or should it be in the foreground?

script:

trap '' SIGINT SIGTERM SIGQUIT

cd /tmp
nohup rsync -e 'ssh -o LogLevel=ERROR' -av --timeout=10 --delete-excluded myapp.war myserver:/tmp/ < /dev/null > /tmp/teste 2> /tmp/teste2

let index=0
while [ $index -lt 400000 ]
do
  let index=index+1
done

echo "script finished"
echo "index:$index"

I'm suspecting that the ssh channel is dying before rsync. Following the end of the output of the strace command in pid of rsync:

[...]
write(4, "\374\17\0\7", 4)              = 4
select(5, NULL, [4], [4], {10, 0})      = 1 (out [4], left {9, 999998})
--- SIGINT (Interrupt) @ 0 (0) ---
--- SIGCHLD (Child exited) @ 0 (0) ---
wait4(-1, [{WIFEXITED(s) && WEXITSTATUS(s) == 255}], WNOHANG, NULL) = 12738
wait4(-1, 0x7fffaea6a85c, WNOHANG, NULL) = -1 ECHILD (No child processes)
rt_sigreturn(0xffffffffffffffff)        = 0
select(0, NULL, NULL, NULL, {0, 400000}) = 0 (Timeout)
rt_sigaction(SIGUSR1, {SIG_IGN, [], SA_RESTORER, 0x3fcb6326b0}, NULL, 8) = 0
rt_sigaction(SIGUSR2, {SIG_IGN, [], SA_RESTORER, 0x3fcb6326b0}, NULL, 8) = 0
wait4(12738, 0x7fffaea6aa7c, WNOHANG, NULL) = -1 ECHILD (No child processes)
getpid()                                = 12737
kill(12738, SIGUSR1)                    = -1 ESRCH (No such process)
write(2, "rsync error: unexplained error ("..., 72) = 72
write(2, "\n", 1)                       = 1
exit_group(255)                         = ?
Process 12737 detached
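The strace output supports that suspicion: the terminal delivers SIGINT to the whole foreground process group, so the ssh child that rsync spawns still receives it even though the script itself ignores the signal. One workaround sketch (untested against this setup) is to start rsync in its own session so terminal-generated signals never reach it:

```shell
trap '' SIGINT SIGTERM SIGQUIT

# setsid detaches rsync (and the ssh it spawns) from the terminal's
# process group, so Ctrl-C in the parent no longer hits them
setsid rsync -e 'ssh -o LogLevel=ERROR' -av --timeout=10 --delete-excluded \
    myapp.war myserver:/tmp/ < /dev/null > /tmp/teste 2> /tmp/teste2 &
wait $!
```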

rsync over active ssh connection

Posted: 01 Apr 2021 08:02 PM PDT

Trying to script as cleanly as possible, I wonder if there is some solution for the following situation:

One Linux server running sshd and one android device, with dropbear ssh client and rsync installed (no server).

I'm writing a script to be run remotely with a cron that backups the android memory to the linux server. The cron calls something like:

ssh remoteuser@linuxserver -i path_to_rsa_key runthisscript.sh  

runthisscript.sh performs a few things with the existing data, and what I want to do, in the middle of the script, is to rsync from the android device back to the server, taking advantage of the ssh connection that is already open (as there is no sshd running on the android).

I've developed other solutions, like breaking my server script in several parts and calling them one after another, with the rsync (android to server direction) in the middle, but I was looking for a more elegantly implemented solution (single script, most of the work done in the server side).

Ideas?
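One pattern that fits the single-connection constraint (a sketch; assumes rsync on the device can run in daemon mode, and all paths, ports, and the module name are placeholders) is to reverse-forward a port to an rsync daemon on the device, so the server-side script can pull from it over the existing ssh session:

```shell
# On the device: a local rsync daemon, no sshd required
rsync --daemon --config=/sdcard/rsyncd.conf

# One ssh invocation carries both the remote script and the reverse tunnel
ssh -R 8873:localhost:873 -i path_to_rsa_key remoteuser@linuxserver runthisscript.sh

# Inside runthisscript.sh, on the server, the pull step becomes:
#   rsync -av rsync://localhost:8873/backup/ /srv/android-backup/
```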

strftime returning time for wrong timezone

Posted: 01 Apr 2021 07:05 PM PDT

I'm trying to get "dts" to echo the current local time in vim (cygwin under Windows 7), but the output is GMT+1 instead of the local time (GMT-7). My abbreviation works fine on other machines, but I can't get it to work on this one.

The abbreviation is:

dts <expr> strftime("%m.%d.%Y %H:%M:%S")
Result: 3/27/2012 9:53:03 PM

From the cygwin command line, TZ is set to America/Los_Angeles and the date command outputs the correct time. It's only when I try using strftime() under gvim that the timestamp is wrong. I tried adding %z to see the GMT offset but the results are even more baffling:

:iab qwe strftime("%c (%z)")
Result: 3/27/2012 9:53:03 PM (ric)

I have been unable to figure out what "ric" means.
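One avenue worth checking (a sketch, untested): a native gvim uses the Microsoft C runtime, which neither inherits cygwin's TZ nor understands zone names like America/Los_Angeles; it expects the older std/offset/dst form, and a mis-parsed TZ value may explain odd %z output. Forcing a CRT-style zone from within Vim before calling strftime():

```vim
" Set a CRT-format timezone for this Vim instance
" (PST8PDT is the assumed equivalent of America/Los_Angeles)
let $TZ = 'PST8PDT'
```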

Configure a Local DNS Resolver That Only Caches for a Short Period

Posted: 01 Apr 2021 06:18 PM PDT

I am working on an application that will be used to verify new domains are configured correctly as they're set up for hosting. Part of this checks the validity of SPF, DomainKey, DKIM records, etc.

I currently use a default TTL of one hour for most of these records. Occasionally a mistake is found in one of the records so it needs to be updated. Currently, if I've just tested the domain I have to wait for the system's resolver's cached record to expire before I can verify it is correct with my application. (Yes, I can check manually but I wrote the application so I don't have to).

I would like to set up a DNS server on the system to act as a normal caching resolver except that it will expire records after a set maximum time such as five minutes, or just not cache at all. Not all of the domains have DNS hosted on my normal name servers, so this system would have to query the authoritative name servers for a domain rather than use upstream resolvers (which would just use their cached records).

This machine is not currently running DNS of any kind, so I can install BIND or djbdns (or something else if there's a good suggestion).
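As one of the "something else" candidates (a sketch; the values are illustrative), Unbound is a full recursive resolver, so it queries the authoritative servers itself rather than an upstream cache, and it can cap how long any record stays cached:

```
# /etc/unbound/unbound.conf
server:
    interface: 127.0.0.1
    # never serve a cached record older than five minutes,
    # regardless of the published TTL
    cache-max-ttl: 300
```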
