Sunday, July 10, 2022

Recent Questions - Server Fault

Idea for better performance in a remote WebLogic development environment?

Posted: 10 Jul 2022 07:56 PM PDT

I would like to ask for help and guidance about a problem and an idea I had; I'm open to suggestions.

I'm working on a project where the application runs on WebLogic and has several modules.

On the dev team, everyone has Eclipse and a local WebLogic instance on their machine to run the application and carry out maintenance or improvements in the code.

The problem is that some modules, because they are more complex and very large, take a very long time for each new build, sometimes around 30 minutes. Maybe for more experienced developers who are already familiar with the code it is simpler.

In my case, as a junior developer, I often make a change just to test it in the application, and going from republish to republish eats a few hours a day.

I considered migrating the local environment from WebLogic to Tomcat, but since the application has 12 modules, moving all of that to Tomcat was unfeasible, as it would require yet another server running.

So, since we have some resources available on Azure, I had the idea of running WebLogic in a Docker container there, publishing to it and remote-debugging it, so performance would be better, with only Eclipse running on the local machine.

With this idea of running WebLogic in Docker and leaving Eclipse only to connect to it, would republish and debug really work well?

Eclipse: Photon, WebLogic: 12.2.1.4.0, Java: 7
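
For what it's worth, a rough sketch of that setup (the image name, tag and ports are assumptions; Oracle's official WebLogic images require accepting their license, and the JAVA_OPTIONS pass-through depends on the image's start scripts), with the JVM debug agent enabled so Eclipse can attach remotely:

# Run WebLogic in a container with the JDWP agent listening on 8453
# (Java 7 address syntax, matching the version above):
docker run -d --name wls \
  -p 7001:7001 -p 8453:8453 \
  -e JAVA_OPTIONS="-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=8453" \
  container-registry.oracle.com/middleware/weblogic:12.2.1.4

# In Eclipse: Run > Debug Configurations > Remote Java Application,
# host = the Azure container's address, port = 8453.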

Justifying Horizontal Scaling

Posted: 10 Jul 2022 07:11 PM PDT

When is horizontal scaling likely to solve your scaling problems?

Let's say you have a single API node (no DB) and a desired goal of 10k RPS over 5 minutes where the p95 is < x ms. Requests are coming in and you start to see that p95 go above your goal of x. If you don't see any clear metrics indicating poor application performance (>75% CPU, >75% RAM, etc.), is it safe to assume horizontal scaling is likely the solution?

At first I thought the answer was "yes", but then I saw this article. Vertically scaling a Node application from a large to an xlarge AWS instance allowed it to go from 10k RPS to 25k RPS. How is that possible? CPU utilization on the 10k test was around 10% (not that high). It's possible it's memory, but that seems unlikely. Am I missing something? Or is horizontal scaling just cheaper than vertical scaling, with the additional benefit of resiliency?

NGINX is not respecting DNS TTL of Upstream Server

Posted: 10 Jul 2022 06:19 PM PDT

I have an NGINX TCP load balancer with the following configuration:

user myusername;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

include /usr/share/nginx/modules/*.conf;
load_module /usr/lib/nginx/modules/ngx_stream_module.so;

events {
    worker_connections 1024;
}

stream {

    upstream api_backend_http {
        server myserver1.mydomain.com:80;
        server myserver2.mydomain.com:80;
    }

    upstream api_backend_https {
        server myserver1.mydomain.com:443;
        server myserver2.mydomain.com:443;
    }

    server {
        listen            80;
        proxy_pass        api_backend_http;
        proxy_buffer_size 16k;
        proxy_connect_timeout 1s;
    }

    server {
        listen            443;
        proxy_pass        api_backend_https;
        proxy_buffer_size 16k;
        proxy_connect_timeout 1s;
    }

}

The DNS TTL of myserver1.mydomain.com is set to 30 seconds. 45 minutes after changing it, NGINX is still sending traffic to the old IP address.

This shouldn't happen; ideally it should respect the TTL of the upstream server's DNS name, but it doesn't seem to be doing that. Does anyone know what TTL NGINX actually uses, and how to change it?

Side note, this feels like a bug in NGINX.
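
For background: open-source NGINX resolves the hostnames in an upstream block once, at startup or reload, and keeps using those addresses until the next reload; the per-server resolve parameter that honors DNS TTLs is an NGINX Plus feature. A commonly suggested workaround, sketched below with an assumed local resolver address (and requiring a reasonably recent NGINX for map and variable proxy_pass in the stream module), is to pass the name through a variable so it is re-resolved at run time. Note this bypasses the upstream block, so balancing across the two servers is lost:

stream {
    resolver 127.0.0.53 valid=30s;      # cache lookups for at most 30s

    map "" $backend_https {
        default myserver1.mydomain.com:443;
    }

    server {
        listen     443;
        proxy_pass $backend_https;      # variable => resolved at run time
    }
}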

Creating Zip Using Foreach in PS Produces Nothing

Posted: 10 Jul 2022 06:08 PM PDT

I have a little use case where I need to zip each directory within a directory (the subdirs are flat, so no worry about recursion), and attempting to do so with a simple one-liner in PS has produced nothing I can even debug with. That said, I'm very green when it comes to PowerShell.

Here's what I'm using currently to no avail:

$directories = Get-ChildItem -Path . -Directory
foreach($directory in $directories) {
    Compress-Archive -Path $directory.FullName -DestinationPath "$($directory.FullName).zip"
}

The console briefly flashes, suggesting the zip command ran, but there is no output; I'm just trying to write to the same directory where the target dirs are located. I've sanity-checked that I'm using the right property on the object, and a foreach loop over it produces what I'd expect:

$directories = Get-ChildItem -Path . -Directory
foreach($directory in $directories) {
    echo $directory.FullName
}

As another sanity test, I created a series of dummy dirs, each with a single small file inside, following a simple scheme with an incrementing suffixed integer (e.g. test1, test2, etc.), and did a similar test with a for loop:

for($num = 1; $num -le 3; $num++) { Compress-Archive ".\test$num\" "test$num.zip" }  

Which does correctly produce zips from the test dirs. I'm baffled as to why foreach looping over the result of the Get-ChildItem call isn't well-received by Compress-Archive when I can verify it does iterate over each directory.
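
One way to surface whatever is failing (a diagnostic sketch, not a fix): make the errors terminating and catch them, so they don't just flash by in the console:

Get-ChildItem -Path . -Directory | ForEach-Object {
    $dir = $_.FullName
    try {
        # -Force would additionally overwrite a zip left by a previous run
        Compress-Archive -Path $dir -DestinationPath "$dir.zip" -ErrorAction Stop
    } catch {
        Write-Warning "Failed on ${dir}: $($_.Exception.Message)"
    }
}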

How does warming up an email address, or email domain, technically work?

Posted: 10 Jul 2022 08:06 PM PDT

Whenever a new email sender, more specifically a new domain from which emails will be sent, is being warmed up, what is it that's actually getting warmed up, on a technical level?

Is it one's ISP (!) that gets a signal that such and such domain is gradually ramping up the amount of email it sends?

Notice that I haven't specified that the new domain is necessarily sending emails to Gmail, Outlook or Yahoo, nor have I said that it is sending emails via these three big providers.

Webserver and permissions - two users having rights to modify specific (but not all) files

Posted: 10 Jul 2022 03:46 PM PDT

So, the typical situation is like this: the webserver (in this case nginx) runs under the www-data user.

And then there is also 'konrad' user, which is just an ordinary user.

And now, the whole website (/var/www/html/cool-site) has the owner: konrad, and group: www-data.

Files are mostly 750.

And that is fine (I guess). But... now I have a situation where another user comes in. Let's call him 'mike'. What I want to achieve is that he can modify files owned by me, and I can modify files owned by him.

Or, better yet: I, as an admin, would like to decide, per directory or file, whether only I (konrad), only he (mike), or both of us can make changes.

Obviously, we both should have the right to view the files and browse the directories.

What I was thinking about is this: create yet another group, like 'common', add the www-data user to it, and add both of us (konrad and mike) to it. Whenever I (or mike) decide that we both should have write access to a dir/file, we would change its group to 'common' and set the group permissions to allow writing there. Then I realized that in this scenario www-data would also get write access to those directories/files.

So I'm stuck. I believe there is a solution, but I can't think of anything :)
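
One approach worth considering (a sketch; it assumes the filesystem is mounted with ACL support, and the shared subdirectory is hypothetical): POSIX ACLs can grant konrad and mike write access per directory or file without touching the group, so www-data keeps only its existing read rights:

# Grant both users write access (X = execute only on directories):
sudo setfacl -R -m u:konrad:rwX -m u:mike:rwX /var/www/html/cool-site/shared

# Default ACLs so newly created files inherit the same grants:
sudo setfacl -R -d -m u:konrad:rwX -m u:mike:rwX /var/www/html/cool-site/shared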

exim4 authentication with external smtp server for smarthost

Posted: 10 Jul 2022 03:21 PM PDT

/etc/exim4/update-exim4.conf.conf

dc_eximconfig_configtype='smarthost' # was local
dc_other_hostnames='' # was mini31
dc_local_interfaces='127.0.0.1 ; ::1'
dc_readhost='mini31'
dc_relay_domains=''
dc_minimaldns='false'
dc_relay_nets=''
dc_smarthost='send.one.com:465' # Yes, two colons.
CFILEMODE='644'
dc_use_split_config='false'
dc_hide_mailname='true'
dc_mailname_in_oh='true'
dc_localdelivery='mail_spool'

/etc/exim/passwd.client

target:send.one.com:my-real-address@something.com:MyTopSecretPassword  

/etc/email-addresses

localusername:my-real-address@something.com  

/var/log/exim4/mainlog

2022-07-10 23:03:49 1oAf1p-0004w0-GD <= my-real-address@something.com U=rwb P=local S=355
2022-07-10 23:03:49 1oAf1p-0004w0-GD H=send.one.com [2a02:2350:5:20e::2] Network is unreachable
2022-07-10 23:03:49 1oAf1p-0004w0-GD H=send.one.com [2a02:2350:5:20e::1] Network is unreachable

or sometimes

mini31 # tail /var/log/exim4/mainlog
2022-07-10 23:05:28 1oAf1p-0004w0-GD Spool file is locked (another process is handling this message)
2022-07-10 23:05:28 End queue run: pid=19276
2022-07-10 23:08:49 1oAf1p-0004w0-GD H=send.one.com [46.30.211.141]: SMTP timeout after initial connection: Connection timed out
2022-07-10 23:08:49 1oAf1p-0004w0-GD == my-real-address@something.com R=smarthost T=remote_smtp_smarthost defer (110): Connection timed out H=send.one.com [46.30.211.141]: SMTP timeout after initial connection
2022-07-10 23:10:58 exim 4.92 daemon started: pid=19694, -q30m, listening for SMTP on [127.0.0.1]:25 [::1]:25
2022-07-10 23:10:58 Start queue run: pid=19695
2022-07-10 23:10:58 1oAf1p-0004w0-GD == my-real-address@something.com R=smarthost T=remote_smtp_smarthost defer (-53): retry time not reached for any host for 'rwb.me.uk'
2022-07-10 23:10:58 End queue run: pid=19695
2022-07-10 23:11:26 1oAf9C-00057u-P5 <= root@mini31 U=root P=local S=330
2022-07-10 23:11:26 1oAf9C-00057u-P5 == my-real-address@something.com R=smarthost T=remote_smtp_smarthost defer (-53): retry time not reached for any host for 'rwb.me.uk'

Mail delivery failed: returning message to sender

...
my-real-address@something.com
  host send.one.com [46.30.211.141]
  SMTP error from remote mail server after pipelined end of data:
  530 5.7.0 Authentication required
...

mini31 # mailq
12m   355 1oAf1p-0004w0-GD <my-real-address@something.com> (rwb)
          my-real-address@something.com
 4m   330 1oAf9C-00057u-P5 <root@mini31>
          my-real-address@something.com

Question

What is going on, and why isn't it working?

One.com seem to say port 465 and SSL/TLS. Do I need to configure SSL/TLS in exim4 somehow? I'm pretty sure they need the from address to be my-real-address@something.com; do I need to set that somewhere?

Do I need to remove the stuck messages from mailq?
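
A sketch of two things worth checking (both are assumptions to verify against one.com's documentation): Debian's stock smarthost transport speaks STARTTLS, which is what the submission port 587 expects, whereas port 465 wants TLS from the first byte; and in passwd.client the first field has to match the smarthost name:

# /etc/exim4/update-exim4.conf.conf (note the double colon before the port)
dc_smarthost='send.one.com::587'

# /etc/exim4/passwd.client (format: host:login:password)
send.one.com:my-real-address@something.com:MyTopSecretPassword

# then rebuild, restart, and force a retry of the queued messages:
update-exim4.conf && systemctl restart exim4 && exim4 -qff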

When an authoritative server is found in the NS record, is the A record checked for the ip address or not?

Posted: 10 Jul 2022 03:02 PM PDT

I am trying to understand what NS records are, how glue records relate to them, and what happens afterwards. As far as I understand, the NS record contains the hostname(s) of the authoritative nameserver, or of those which might hold info on where to find it. Assuming no caching:

1.) How does the recursor know whether the NS record points to the authoritative nameserver or directs it to another nameserver? Or are both things the same? I mean, are the entries in the NS record just 'best effort' nameservers which might hold the hostname of the authoritative nameserver, or might point to nameservers which may or may not have information on the IP address of the authoritative nameserver?

2.) Once the recursor finds an NS entry at a particular nameserver (e.g. a TLD nameserver), does it check the A record to find the corresponding IP address, or does it repeat the whole DNS process again (querying the root nameserver, then the TLD nameserver, etc.)?

3.) How exactly does "glueing" work? I'm aware it's related to avoiding circular queries, but are the glue IP addresses found in A records? Without glue, does that mean that for 2.) the whole DNS process starts all over again for that hostname (the one from the NS record, I guess)?

4.) In Cloudflare's explanation of the NS record, the example includes an @, and I've seen it too in this example zone file (Example Zone File); what does the @, or leaving that column blank, actually mean?

I'd really appreciate it if someone could help me understand this, since I'm having a hard time figuring it out. Thank you in advance.
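
A concrete way to watch the delegation mechanics (example.com and the TLD server name are just illustrative):

# Ask a .com TLD server directly, with recursion disabled:
dig +norecurse @a.gtld-servers.net example.com NS
# The AUTHORITY section carries the NS hostnames; when those hosts live
# inside the delegated zone itself, the ADDITIONAL section carries their
# A/AAAA "glue", so the resolver doesn't have to start a fresh lookup loop.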

Are Western Digital SSDs compatible with QNAP TS-469U-RP NAS?

Posted: 10 Jul 2022 03:19 PM PDT

I just purchased a QNAP TS-469U-RP NAS. It is my understanding that just about any SSD will work in a NAS, as long as the make and model of all four drives are identical. That said, QNAP has a web page listing all drives which have been tested with the TS-469U-RP and are known to be compatible, as shown in the following screenshot.

QNAP's "recommended" list of SSDs

I want to purchase four Western Digital 1TB Green SSDs (model # WDS100T3G0A), simply because I have used them in the past and found WD drives to be reliable. Is there any reason why I should not use these Western Digital SSDs in my QNAP TS-469U-RP? If so, why?

P.S. I do see that I can submit a request to QNAP asking them to test the Western Digital drives, but that will likely take time and I would like to get this done ASAP.

Cross-sign third party DV cert with our own CA for high trust

Posted: 10 Jul 2022 03:00 PM PDT

I am looking to expand trust within our application by setting up mutual TLS between the customer's service and our service. I am trying to wrap my head around this stuff as I am kinda new to this tech, so I would like to confirm my approach.

I am thinking of asking the customer for their Domain-validated certificate. I will then cross-sign it with our own intermediate CA (AWS private CA) and generate a leaf certificate which they will use for requests.

On the handshake with our server I want to validate that they are a company/domain allowed to interact with our services (validate their DV cert). Also, since I cross-sign with our CA, I can revoke their access if needed. So basically I validate those two things.

Is this best practice for this sort of thing? Will the customer need to provide me with a new certificate every year when it expires? Will I have any problems cross signing their DV cert with my intermediate CA?

Extra information:

I want there to be a real-time set-up of a trusted encrypted session. So I want the client (which will be the customer server) to send a certificate (which we provide) to our service.

I'm trying to build a trust network in which I can onboard new users and ensure they are trusted entities (hence the DV cert part)

Maybe I don't have my own private CA and use a commercial CA instead.
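
For the mechanics, a minimal sketch (file names are assumptions): openssl x509 can re-sign an existing certificate with your intermediate CA, preserving the customer's subject and public key, though in practice you would usually ask the customer for a CSR rather than re-sign their DV cert directly:

openssl x509 -in customer_dv.pem \
    -CA intermediate.pem -CAkey intermediate.key -CAcreateserial \
    -days 365 -out cross_signed.pem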

How to fix TLS/SSL vulnerabilities in Windows Server?

Posted: 10 Jul 2022 02:56 PM PDT

Currently on our Windows server (Windows Server 2016), we have the following cipher suites enabled:

TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384
TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256
TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA
TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA
TLS_RSA_WITH_AES_256_GCM_SHA384
TLS_RSA_WITH_AES_128_GCM_SHA256
TLS_RSA_WITH_AES_256_CBC_SHA256
TLS_RSA_WITH_AES_128_CBC_SHA256

Still, the following security vulnerabilities are reported for our server:

  1. TLS/SSL Birthday attacks on 64-bit block ciphers (SWEET32)
  2. TLS/SSL Server Supports 3DES Cipher Suite <-- However there are no 3DES ciphers as listed above
  3. TLS/SSL Server Supports The Use of Static Key Ciphers

I am using Tomcat 9.0.62. How can I fix these security vulnerabilities?
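
On the findings themselves: SWEET32 and finding 2 point at 3DES, which SChannel may still have enabled system-wide even though it isn't in the list above, and "static key ciphers" usually refers to the TLS_RSA_* suites (static RSA key exchange, no forward secrecy). A sketch using the built-in cmdlets on Server 2016 (test against a staging box first; a reboot may be needed before scanners see the change):

# Remove 3DES and the static-RSA key-exchange suites:
Disable-TlsCipherSuite -Name "TLS_RSA_WITH_3DES_EDE_CBC_SHA"
Get-TlsCipherSuite |
    Where-Object { $_.Name -like "TLS_RSA_WITH_AES*" } |
    ForEach-Object { Disable-TlsCipherSuite -Name $_.Name }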

Nginx Log to access log or error log only if Http Status code = 401

Posted: 10 Jul 2022 12:55 PM PDT

I am working on an nginx configuration where we proxy-pass requests to one of our servers. We are getting a lot of 401s, which are causing our pod to die. We want to log those IPs to a file somehow, so we can dynamically ban them. I am currently stuck on how to configure Nginx to log the IPs of requests with status code 401. Any ideas?

log_format json escape=json '{"@timestamp": "$time_iso8601", '
                            '"remote_addr": "$remote_addr", '
                            '"body_bytes_sent": $body_bytes_sent, '
                            '"status": $status, '
                            '"request": "$request", '
                            '"request_id": "$request_id", '
                            '"request_method": "$request_method", '
                            '"request_time": $request_time, '
                            '"http_referrer": "$http_referer", '
                            '"http_user_agent": "$http_user_agent"}';

access_log off;

location /v1/events {
    set $contentType $http_content_type;

    if ($http_content_type ~* "^text/plain") {
        set $contentType "application/json";
    }

    proxy_set_header Content-Type $contentType;
    proxy_ssl_session_reuse off;
    proxy_ssl_server_name on;
    proxy_intercept_errors on;
    proxy_pass OUR_DOMAIN;
}

location / {
    proxy_ssl_session_reuse off;
    proxy_ssl_server_name on;
    proxy_intercept_errors on;
    proxy_pass OUR_DOMAIN;
}
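
One way to do this (a sketch; the log path is an assumption): map the status to a flag and use the if= parameter of access_log (available since nginx 1.7.0), reusing the json format already defined above. The map block belongs at the http level; the access_log line can sit in the server block:

map $status $log_unauthorized {
    401      1;
    default  0;
}

access_log /var/log/nginx/unauthorized.log json if=$log_unauthorized;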

Email forwarding from exim on debian

Posted: 10 Jul 2022 03:43 PM PDT

So it looks like Debian (10) comes with exim out of the box?

mini31 # apt list --installed | grep exim

WARNING: apt does not have a stable CLI interface. Use with caution in scripts.

exim4-base/oldstable,oldstable,now 4.92-8+deb10u6 amd64 [installed,automatic]
exim4-config/oldstable,oldstable,now 4.92-8+deb10u6 all [installed,automatic]
exim4-daemon-light/oldstable,oldstable,now 4.92-8+deb10u6 amd64 [installed,automatic]
mini31 # apt list --installed | grep postfix

WARNING: apt does not have a stable CLI interface. Use with caution in scripts.

mini31 #

Am I correct that what I need, to get it to send e-mails out to a real e-mail address, is called e-mail forwarding? (Or is it called a smarthost?)

I append

root: my-real-email-address@example.com  

to /etc/aliases, right?

So instead of having to read them with mail, the messages will go to my real e-mail address?

I imagine that I have to type in my smtp details somewhere? Any clues where that might be?
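
A sketch of the usual Debian wiring (verify the details against your provider): exim reads /etc/aliases directly, so appending that line is enough for root's local mail; sending out through your provider is the smarthost setup, and the SMTP login details go in /etc/exim4/passwd.client:

echo 'root: my-real-email-address@example.com' >> /etc/aliases

dpkg-reconfigure exim4-config   # choose "mail sent by smarthost; received via SMTP or fetchmail"

# /etc/exim4/passwd.client, one line per smarthost: host:login:password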

gRPC via CloudFlare results in HTTP/2 internal error code 2

Posted: 10 Jul 2022 08:14 PM PDT

My setup:

.Net gRPC Server <-> Nginx <-> CloudFlare <-> gRPC client (C#/Python)  

My .Net gRPC server is configured to support insecure HTTP/2, listening on port 50052:

webBuilder.UseStartup<StartupGrpc>().UseUrls($"http://*:50052");
webBuilder.ConfigureKestrel(serverOptions => {
    serverOptions.ConfigureEndpointDefaults(listenOptions => {
        listenOptions.Protocols = HttpProtocols.Http2;
    });
});

Nginx is set to grpc_pass as follows:

server {
    server_name grpc.mydomain.com;
    listen      443 ssl http2;
    ssl_certificate     /etc/nginx/cf_origin_ssl/mydomain.pem;
    ssl_certificate_key /etc/nginx/cf_origin_ssl/mydomain.key;

    proxy_cache off;

    location / {
        grpc_pass grpc://localhost:50052;
    }
}

server {
    server_name mydomain.com;
    listen      443 ssl http2;
    ssl_certificate     /etc/nginx/cf_origin_ssl/mydomain.pem;
    ssl_certificate_key /etc/nginx/cf_origin_ssl/mydomain.key;

    proxy_cache off;

    location / {
        proxy_pass localhost:50051;
    }
}

CloudFlare: Network/gRPC -> On, SSL/TLS -> Full (strict) (with Origin Certificates generated by CloudFlare). I tested, and my web server at mydomain.com works fine. However, gRPC calls from the .Net/C# gRPC client return:

Unhandled exception. Grpc.Core.RpcException: Status(StatusCode="Unavailable", Detail="Error starting gRPC call. IOException: The request was aborted. Http2StreamException: The HTTP/2 server reset the stream. HTTP/2 error code 'INTERNAL_ERROR' (0x2).", DebugException="System.IO.IOException: The request was aborted.
 ---> System.Net.Http.Http2StreamException: The HTTP/2 server reset the stream. HTTP/2 error code 'INTERNAL_ERROR' (0x2).
   --- End of inner exception stack trace ---
   at System.Net.Http.Http2Connection.ThrowRequestAborted(Exception innerException)
   at System.Net.Http.Http2Connection.Http2Stream.CheckResponseBodyState()
   at System.Net.Http.Http2Connection.Http2Stream.TryReadFromBuffer(Span`1 buffer, Boolean partOfSyncRead)
   at System.Net.Http.Http2Connection.Http2Stream.ReadDataAsync(Memory`1 buffer, HttpResponseMessage responseMessage, CancellationToken cancellationToken)
   at Grpc.Net.Client.StreamExtensions.ReadMessageAsync[TResponse](Stream responseStream, GrpcCall call, Func`2 deserializer, String grpcEncoding, Boolean singleMessage, CancellationToken cancellationToken)
   at Grpc.Net.Client.Internal.GrpcCall`2.RunCall(HttpRequestMessage request, Nullable`1 timeout)")

I also tried to make gRPC calls from Python, and got a similar error:

Traceback (most recent call last):
  ...
  File "/home/user/miniconda/lib/python3.9/site-packages/grpc/_channel.py", line 946, in __call__
    return _end_unary_response_blocking(state, call, False, None)
  File "/home/user/miniconda/lib/python3.9/site-packages/grpc/_channel.py", line 849, in _end_unary_response_blocking
    raise _InactiveRpcError(state)
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
    status = StatusCode.UNAVAILABLE
    details = "failed to connect to all addresses"
    debug_error_string = "{"created":"@1634609018.116476058","description":"Failed to pick subchannel","file":"src/core/ext/filters/client_channel/client_channel.cc","file_line":3158,"referenced_errors":[{"created":"@1634609018.116472621","description":"failed to connect to all addresses","file":"src/core/lib/transport/error_utils.cc","file_line":147,"grpc_status":14}]}"
>

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  ...
  File "/home/user/miniconda/lib/python3.9/site-packages/grpc/_channel.py", line 946, in __call__
    return _end_unary_response_blocking(state, call, False, None)
  File "/home/user/miniconda/lib/python3.9/site-packages/grpc/_channel.py", line 849, in _end_unary_response_blocking
    raise _InactiveRpcError(state)
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
    status = StatusCode.INTERNAL
    details = "Received RST_STREAM with error code 2"
    debug_error_string = "{"created":"@1634609018.553728473","description":"Error received from peer ipv4:172.67.179.119:443","file":"src/core/lib/surface/call.cc","file_line":1069,"grpc_message":"Received RST_STREAM with error code 2","grpc_status":13}"
>

In both cases, the gRPC requests did get through CloudFlare and Nginx and reached my gRPC server (the remote procedures got executed). The Nginx logs also report a 200 success code:

116.110.42.123 - - [19/Oct/2021:01:25:22 +0000] "POST /greet.Greeter/CsharpSayHello HTTP/2.0" 200 64 "-" "grpc-dotnet/2.40.0.0" "116.110.42.123" "grpc.mydomain.com" sn="grpc.mydomain.com" rt=0.002 ua="127.0.0.1:50052" us="200" ut="0.000" ul="71" cs=-
116.110.42.123 - - [19/Oct/2021:01:27:57 +0000] "POST /greet.Greeter/CsharpSayHello HTTP/2.0" 200 68 "-" "grpc-python/1.41.0 grpc-c/19.0.0 (linux; chttp2)" "116.110.42.123" "grpc.mydomain.com" sn="grpc.mydomain.com" rt=0.001 ua="127.0.0.1:50052" us="200" ut="0.000" ul="75" cs=-

I googled a lot about CloudFlare gRPC and Nginx, but couldn't figure out what is wrong.

'quiet splash' breaks default Ubuntu 20.04 boot on Server version, not on Desktop

Posted: 10 Jul 2022 03:02 PM PDT

I'm aware that the internet is packed with 'quiet splash' kernel config issues regarding the boot process on several hardware sets, which generally leads to graphics issues that can be prevented with 'nomodeset' or similar. This is not one of them.

On a fresh 20.04.1 Server installation (no additional packages installed, absolute installer-default minimum set), just adding quiet splash to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub (which is empty by default) breaks the boot process. The splash screen is shown and hangs there forever. No console login is possible. Adding nomodeset has no effect. Same thing on an Intel NUC w/ UEFI and within a Parallels VM.

When using the minimal set of the corresponding Desktop image, quiet splash is the default cmdline and the splash works fine.

There are several questions I can't find an answer to yet:

  1. What are the differences here? It shouldn't be a driver issue, as only kernel drivers are used, no proprietary sets, and AFAIK Ubuntu has used the same kernel config for desktop and server since 12.04 or so. Any hints on which configurations to check for differences?
  2. Tips on how to debug this? As the issue only occurs with quiet splash set, I'm not able to see any logs from boot. I guess I could mount the partition with another system after a failed start and inspect log files, but is there a way that doesn't involve a second (or live) system? (See the sketch below.)
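
For question 2, a sketch: with a persistent journal, the logs of the hung boot survive a power-cycle and can be read without a second system:

sudo mkdir -p /var/log/journal          # enables persistent journald storage
sudo systemctl restart systemd-journald
# ...reproduce the hang, reset the box, boot once without "quiet splash", then:
journalctl -b -1 -p warning             # warnings and worse from the previous boot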

Thanks!

Save or capture Screen output to file after it has written to stdout?

Posted: 10 Jul 2022 02:06 PM PDT

I've run a script in a screen session, but I forgot to redirect stdout to a file. There's about 10MB worth of text. If there were some way to highlight the text and copy-paste it I would, but Ctrl-A + Esc won't scroll my terminal view when I click and drag the mouse. I'm using bash on Ubuntu 18. Is there anything I can try?
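
One built-in option worth trying: screen's hardcopy command dumps the current window together with its scrollback buffer to a file. The catch is that it only recovers as many lines as the scrollback setting kept (the default is around 100 lines, so unless defscrollback was raised beforehand, most of the 10MB may already be gone):

Ctrl-A  :hardcopy -h /tmp/screen-dump.txt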

Reclaim free space after deleting NFS file

Posted: 10 Jul 2022 05:04 PM PDT

I deleted a 137G file on an NFS mount (from a Linux host), and it disappeared from the directory, but the free space reported by df is still the same:

  • the NFS server is a NAS device with almost no logging information, but at least it shows the free space, which is the same as reported by df
  • the file is not open - it's unused for a long time, it doesn't show in lsof (I also have rebooted the NAS device)
  • the difference between the used space from df -h and du -hs . on the full disk is exactly 137G
  • the NFS share is mounted with soft,user options

What could be causing this?
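
One quick check (the mount point is an assumption): NFS "silly renames" a deleted file to .nfsXXXX... while any client still holds it open, and such leftovers keep consuming space until they are released:

find /mnt/nfs -name '.nfs*' -ls
# Also worth checking: snapshots on the NAS itself can pin the freed blocks.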

Script set up via update-rc.d not being called on startup

Posted: 10 Jul 2022 04:02 PM PDT

A script should be called on startup. I did what the internet told me, but it does not get called on startup.

The script in /etc/init.d/my_script simply uses touch to create files, to see whether it works:

#!/bin/bash
set -x

touch /home/db/called > /home/db/rc.log 2>&1

case "$1" in
  start)
    touch /home/db/started > /home/db/rc.log 2>&1
    ;;
  stop)
    touch /home/db/stopped > /home/db/rc.log 2>&1
    ;;
  *)
    echo "Usage: /etc/init.d/test {start|stop}"
    exit 1
    ;;
esac

The internet says that I simply have to run sudo update-rc.d my_script defaults to register it.

When run via service my_script start it works fine; the files are created as expected. But after a reboot no files appear, so I assume the script isn't called at all; the log file rc.log doesn't appear either. Looking through the rc*.d folders (via find /etc/rc* -name "*my_script*"), there also doesn't seem to be a link to the script. (See the header sketch below.)

Maybe some other helpful info:

  • Ubuntu 18.04 Server
  • Script file in /etc/init.d is owned by root like every file there
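
One thing worth checking, as flagged above: the rc*.d symlinks (and the unit systemd generates for the script on 18.04) are derived from an LSB header block, and without one update-rc.d typically warns and creates nothing. A header sketch to put right after the shebang:

### BEGIN INIT INFO
# Provides:          my_script
# Required-Start:    $remote_fs $syslog
# Required-Stop:     $remote_fs $syslog
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: touch test files at boot
### END INIT INFO

Then re-run sudo update-rc.d my_script defaults and check find /etc/rc* -name "*my_script*" again.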

Thanks for any help!

NGINX replace string in $args

Posted: 10 Jul 2022 01:02 PM PDT

I would like to manipulate a parameter in nginx when the string /static/ exists in the src parameter ($arg_param):

location ~ ^/customresize.php {
    if ($arg_param4 ~ /static/) {
        # replace /static/ with /a/static/
    }
}

As you can see, there are parameters before and after, so I just need to replace this one part, e.g.:

https://my.site.io/customresize.php?z=2&w=200&h=100&sec=https://my.site.io/static/img.png

And in the URL above, replace /static/ with /a/static/.
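
A sketch along those lines, using the parameter names from the example URL (z, w, h and sec are assumptions): captures from the if regex can be reassembled into a new query string, and break keeps the rewritten URI in the current location without re-running the rewrite phase:

location ~ ^/customresize.php {
    if ($arg_sec ~ "^(.+?)/static/(.+)$") {
        set $fixed_sec $1/a/static/$2;
        rewrite ^ /customresize.php?z=$arg_z&w=$arg_w&h=$arg_h&sec=$fixed_sec? break;
    }
    # ...existing handling...
}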

Thank you.

Azure cloud VM change letter of temporary drive

Posted: 10 Jul 2022 06:04 PM PDT

In the Azure cloud, when you start a Windows image, the running VM has a temporary drive D: where the page file is located.

Is there any way I can call the API (PowerShell, az cli, etc.) and specify which letter to assign to the temporary drive? I want, for example, disk C: as the OS disk and disk Z: for the temporary drive.

thanks,

PS: I know how to change it after the VM is running, as per https://docs.microsoft.com/en-us/azure/virtual-machines/windows/change-drive-letter
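
As far as I can tell there is no create-time parameter for the temporary disk's letter, so the usual route is to automate the post-boot change from that article, e.g. in a Custom Script Extension that every new VM runs at first boot. After relocating the pagefile, the reassignment step itself is one line:

Get-Partition -DriveLetter D | Set-Partition -NewDriveLetter Z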

Azure Active Directory Domain Services: unblock account

Posted: 10 Jul 2022 04:02 PM PDT

I am hoping that someone can help me unblock an account on an Azure VM.

The VM is domain-joined and is running SQL Server 2014 on Windows Server 2016. I have an Office 365 / Azure AD tenant with Azure Active Directory Domain Services.

I have an account that gets locked because of more than 5 failed attempts, but if I give it some time, it goes active again.

net user /DOMAIN trent
...
Account active               Yes
Account expires              Never

Password last set            5/26/2018 5:19:15 AM
Password expires             8/24/2018 5:19:15 AM
Password changeable          5/27/2018 5:19:15 AM
Password required            No
User may change password     Yes

Workstations allowed         All
Logon script
User profile
Home directory
Last logon                   5/26/2018 5:19:54 AM

Logon hours allowed          All

Local Group Memberships
Global Group memberships     *AdminAgents          *Domain Users
                             *AAD DC Administrators*PWS Wordpress Site Ad
The command completed successfully.

As soon as I try to log in using RDP, I am locked out.

I have looked up similar problems, like https://community.spiceworks.com/topic/2125626-remote-desktop-services-causing-ad-account-lock-out , and tried the tool at https://www.netwrix.com/account_lockout_examiner.html , but it doesn't seem to want to connect to AAD DS.

I have checked that there are:

  • No mapped credentials
  • No old cached creds
  • No other applications
  • No scheduled tasks

I am not sure how to change the group policy to stop this happening, and I can't install AD DS because it is AAD DS.
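
One more place to look (a diagnostic sketch, assuming failed-logon auditing is enabled on the VM): event ID 4625 in the local Security log usually names the logon type, calling process and source address behind the bad attempts:

Get-WinEvent -FilterHashtable @{LogName='Security'; Id=4625} -MaxEvents 10 |
    Format-List TimeCreated, Message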

Any help would be appreciated.

Windows: create a service for running an executable jar without any external libraries

Posted: 10 Jul 2022 02:06 PM PDT

I have a Spring Boot executable jar file which can be run from any command prompt by calling java -jar filename.jar.

I want to create a service for the above, without downloading any external libraries.

Please help if there is a straightforward way.
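
For reference, the built-in registration syntax is below (paths are assumptions), with a caveat: a bare java -jar process does not answer the service control manager's start/stop protocol, so in practice a wrapper (winsw, nssm, Apache procrun) or Spring Boot's own deployment options is still the usual route:

sc.exe create MySpringApp binPath= "C:\java\bin\java.exe -jar C:\apps\filename.jar" start= auto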

How to configure Nginx to make Gitlist work correctly

Posted: 10 Jul 2022 08:07 PM PDT

Here's the full story: to make team projects easier to manage, view and follow, I had to set up a dedicated server to centralize all of our different projects.

For this, I planned to put on a dedicated (VM) Debian server:

  • A VCS (git)
  • A web server (Nginx)
    • Containing a frontend for git (Gitlist)
    • A bugtracker that can work with LDAP (Mantis)
  • An MTA, to give the bugtracker the ability to send mails

Everything is more or less set up and functional, but Gitlist continues to give me some difficulties, and even though I have found some answers, none of them have worked so far, which is why I am here now.

Now the details of the problem:

My git repositories are in /home/git/repositories/ (set to chmod 744 so Gitlist can access them).

I can init (bare) projects, push and pull from them, etc.; everything seems OK for this part.

Nginx is set to serve the content of /var/www/html/ and Gitlist is in the directory /var/www/html/depot/

The Gitlist config.ini has this content:

[git]
client = '/usr/bin/git' ; Your git executable path
default_branch = 'master' ; Default branch when HEAD is detached
repositories[] = '/home/git/repositories' ; Path to your repositories
                                          ; If you wish to add more repositories, just add a new line

; WINDOWS USERS
;client = '"C:\Program Files (x86)\Git\bin\git.exe"' ; Your git executable path
;repositories[] = 'C:\Path\to\Repos\' ; Path to your repositories

; You can hide repositories from GitList, just copy this for each repository you want to hide
; hidden[] = '/home/git/repositories/BetaTest'

[app]
debug = false
cache = true
theme = "default"

; If you need to specify custom filetypes for certain extensions, do this here
[filetypes]
; extension = type
; dist = xml

; If you need to set file types as binary or not, do this here
[binary_filetypes]
; extension = true
; svh = false
; map = true

; set the timezone
[date]
timezone = UTC
format = 'd/m/Y H:i:s'

Here again, everything seems OK: when I go to http://vm/depot/ I see the list of all the projects in the repository, but when I want to view the content of one, I always get a 404. I assume it's the URL routing provided by the Silex framework used in Gitlist that doesn't play well with Nginx, but I can't figure out how to make it work.

Finally, here is my /etc/nginx/sites-enabled/default, which I assume is the one at fault:

server {
    listen 80 default_server;
    listen [::]:80 default_server;

    root /var/www/html;

    index index.html index.htm index.php;

    server_name _;

    location / {
        try_files $uri $uri/ @gitlist =404;
    }

    location ~ \.php$ {
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;

        # With php5-fpm:
        fastcgi_pass unix:/var/run/php5-fpm.sock;

        include /etc/nginx/fastcgi_params;
    }

    location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
        add_header Vary "Accept-Encoding";
        expires max;
        try_files $uri @gitlist;
        tcp_nodelay off;
        tcp_nopush on;
    }

    location @gitlist {
        rewrite ^/.*$ /depot/index.php;
    }
}

I've got one solution from the Gitlist project itself here, but I can't seem to adapt it to my case; I still always get a 404 when I try to view a project's content.
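
One pattern worth trying (a sketch of the usual Silex/Symfony front-controller setup for an app in a sub-directory): hand anything under /depot that isn't a real file to index.php, and let the framework recover the requested route from REQUEST_URI instead of rewriting the path away:

location /depot {
    try_files $uri $uri/ /depot/index.php$is_args$args;
}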

Any suggestions? Thanks in advance

Amazon SES not sending to multiple recipients (AWS SDK for PHP)

Posted: 10 Jul 2022 07:01 PM PDT

I've set up Amazon SES on my server. I'm using the AWS SDK for PHP, version one. Here's the documentation. Here is the code I'm using to send:

$to = $_POST['mailto'];

$response = $email->send_email(
    $from, // Source (SENDER or FROM)
    array('ToAddresses' => array( // Destination (RECIPIENT, or TO)
        $to
    )),

In the AWS SDK docs, here is their example for sending email to one person:

$response = $email->send_email(
    'no-reply@amazon.com', // Source (aka From)
    array('ToAddresses' => array( // Destination (aka To)
        'nobody@amazon.com'
    )),

And to multiple people:

$response = $email->send_email(
    'no-reply@amazon.com', // Source (aka From)
    array( // Destination (aka To)
        'ToAddresses' => array(
            'nobody@amazon.com',
            'nobody@amazon.com'
        )),

I can send to one person easily enough, but no matter what I do, I can't send to two people. I've tried making the recipients 'one@email.com', 'two@email.com', or one@email.com,two@email.com, but it doesn't work. The recipients need to come from PHP on the page that has the form, so I can't hard-code them into the sending PHP file.

It looks something like this.

<input type="hidden" value="one@email.com,two@email.com" id="mailto" name="mailto">  
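
A sketch of what is likely missing: ToAddresses expects one address per array element, so the comma-separated form value has to be split before the call:

$to = array_map('trim', explode(',', $_POST['mailto']));

$response = $email->send_email(
    $from,
    array('ToAddresses' => $to),
    // ...subject and body arguments as before...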

Any help you can give towards a solution would be greatly appreciated!

And I'm out of the Sandbox with production emails enabled.

If I directly edit the sending file instead, it will send.

Using defaultAuthenticationType with PowerShell Web Access

Posted: 10 Jul 2022 07:01 PM PDT

PowerShell Web Access lets you choose the authentication type. By default, it uses a value of Default, which ends up being Negotiate. I have set up CredSSP to allow logging into the PSWA server itself with CredSSP, so that network authentication works from within the session (avoiding a double-hop issue without delegating credentials all over the network).

Anyway, I want CredSSP to be the default option on the sign-in page.

Looking into the configuration options for the PSWA web app in IIS, there are several values that can be set to override the defaults.

One of them is called defaultAuthenticationType which is a string but is set to 0.

This seems like the right setting, but I can't get it to work.

If I inspect the sign-in web page, I can see that the select box has the following values:

0   Default
1   Basic
2   Negotiate
4   CredSSP
5   Digest
6   Kerberos

3 is missing.

JosefZ found that 3 is NegotiateWithImplicitCredential according to this page, but on Windows PowerShell 5.1.15063.966 for me that name/value is missing from the enum.

If I set defaultAuthenticationType to a number, then the web page defaults to a new option:

7   Admin Specified  

I have tried 3 and 4, but neither one works. The login happens using Kerberos, and CredSSP is not used.

If I select CredSSP manually it works as expected.

If I set defaultAuthenticationType to a string like CredSSP, no Admin Specified option appears; it just defaults to Default again, and Kerberos authentication is still used.

Has anyone been able to set this successfully? Web search results have been very lacking.

OpenLDAP proxy cache not retrieving entries

Posted: 10 Jul 2022 08:07 PM PDT

I need to set up a local LDAP proxy cache which connects to our central Active Directory server. The OpenLDAP proxy cache looks like just the thing, but following the manpages as closely as possible, I have not been able to get it working.

I am able to proxy requests through localhost to the remote server, but they are not cached (or at least the cache is not consulted).

The steps I took:

  • Installed openldap-servers and openldap-clients packages
  • Created a slapd.conf config file (details below)
  • Created a directory for the proxy database and copied the default DB_CONFIG file there (details below)
  • Ran slapd -d -1 command to start the server
  • Queried the server using this command: ldapwhoami -vvv -h localhost -D "CN=Melka Martin,OU=(...),DC=int,DC=ourdomain,DC=com" -x -w <password>

The result is success. But sniffing network traffic shows the query is pulled from the central LDAP server.

The slapd output is pretty verbose, but at one point it does state:

QUERY NOT ANSWERABLE
QUERY CACHEABLE

Alas, if it does get cached, it is never answered from the cache. Any ideas what could be wrong?

"cn=admin,dc=int,dc=ourdomain,dc=com" is the DN of an admin user in the remote LDAP server. <something> is his password.

slapd.conf

database        ldap
suffix          "dc=int,dc=ourdomain,dc=com"
rootdn          "cn=admin,dc=int,dc=ourdomain,dc=com"
rootpw          <something>
uri             ldap://dc-04.int.ourdomain.com:389

overlay pcache
pcache         hdb 100000 1 1000 100
pcacheAttrset  0 *
pcacheTemplate (sn=) 0 3600
pcacheBind (sn=) 0 3600 sub dc=int,dc=ourdomain,dc=com

cachesize 200
directory /var/lib/ldap
index       objectClass eq
index       cn eq,sub

DB_CONFIG

# $OpenLDAP$

# one 0.25 GB cache
set_cachesize 0 268435456 1

# Transaction Log settings
set_lg_regionmax 262144
set_lg_bsize 2097152

The verbose log output: http://pastebin.com/9s8HMg7d
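
One detail worth checking against the config above: slapo-pcache only answers searches that match a configured pcacheTemplate (here only (sn=...) filters), and a bind such as ldapwhoami is not a cacheable search. A query the cache should be able to answer on a repeat run (add -D/-w if anonymous reads are refused):

ldapsearch -x -H ldap://localhost -b "dc=int,dc=ourdomain,dc=com" "(sn=Melka)"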

openssl cms not finding signer certificate

Posted: 10 Jul 2022 01:02 PM PDT

So I created a PKCS#7 signed message and am trying to validate it with OpenSSL using the following command:

openssl cms -in demo.p7m -inform DER -verify  

Doing so returns me the following error:

140653850015376:error:2E09D08A:CMS routines:CMS_verify:signer certificate not found:cms_smime.c:353:

I don't understand this error. Here's the output of openssl asn1parse -in demo.p7m -i -inform DER:

http://pastebin.com/AgkVbQjS

Here's the base64 encoded PKCS7:

http://pastebin.com/92mMPVw6

The X509 cert is as follows:

-----BEGIN CERTIFICATE-----  MIIB4zCCAU6gAwIBAgIAMAsGCSqGSIb3DQEBBTA5MRwwGgYDVQQKDBNwaHBzZWNsaWIgZGVtbyBj  ZXJ0MRkwFwYDVQQDDBB3d3cud2hhdGV2ZXIuY29tMCIYDzIwMTIwNjA0MDMxMDMxWhgPMjAxMzA2  MDQwMzEwMzFaMDkxHDAaBgNVBAoME3BocHNlY2xpYiBkZW1vIGNlcnQxGTAXBgNVBAMMEHd3dy53  aGF0ZXZlci5jb20wgZ0wCwYJKoZIhvcNAQEBA4GNADCBiQKBgQCtYr+TcpSQ043ZZi+akC1LR5Q6  MJPJ6/0MQ7IFPt/SCywaxsdFsNQ40+TOSFNkG68nscyB5nEPDkNzLJ7AklNSRHItqxTwohuW4a+f  BfzAi0vXS9IrM2iep13cHE9r5QW9pouRQiYfbi5FegEWbtIc5SrmAxHAH9K3KGRaXEeufwIDAQAB  MAsGCSqGSIb3DQEBBQOBgQBYEsMuWBA9ie4ulXxeLhLoQvEo6vgl5LDRFMuP+AhkKzfXUo2yEMWP  /QxbSglcPT/ycb+5+FhYGWxGatM5V+sB43ZBHZD14ZWPN35ePmDIfqXdRmphhXuhdNU7DWwp97ZR  c26CQXzHurRf29VloV8k5JKwsfnLRPVCrbJySMB6dg==  -----END CERTIFICATE-----  

The cert parses just fine with openssl x509 -in cert.txt -text -noout.

The cert is a self-signed cert. The issuer DN is as follows:

   92:d=6  hl=2 l=  57 cons:       SEQUENCE
   94:d=7  hl=2 l=  28 cons:        SET
   96:d=8  hl=2 l=  26 cons:         SEQUENCE
   98:d=9  hl=2 l=   3 prim:          OBJECT            :organizationName
  103:d=9  hl=2 l=  19 prim:          UTF8STRING        :phpseclib demo cert
  124:d=7  hl=2 l=  25 cons:        SET
  126:d=8  hl=2 l=  23 cons:         SEQUENCE
  128:d=9  hl=2 l=   3 prim:          OBJECT            :commonName
  133:d=9  hl=2 l=  16 prim:          UTF8STRING        :www.whatever.com

That matches the issuer DN in the SignerInfo:

  782:d=14 hl=2 l=  57 cons:               SEQUENCE
  784:d=15 hl=2 l=  28 cons:                SET
  786:d=16 hl=2 l=  26 cons:                 SEQUENCE
  788:d=17 hl=2 l=   3 prim:                  OBJECT            :organizationName
  793:d=17 hl=2 l=  19 prim:                  UTF8STRING        :phpseclib demo cert
  814:d=15 hl=2 l=  25 cons:                SET
  816:d=16 hl=2 l=  23 cons:                 SEQUENCE
  818:d=17 hl=2 l=   3 prim:                  OBJECT            :commonName
  823:d=17 hl=2 l=  16 prim:                  UTF8STRING        :www.whatever.com

Here's the serial number of the SignerInfo:

  841:d=12 hl=2 l=   1 prim:             INTEGER           :00  

This matches the serial number of the X509 cert:

   77:d=6  hl=2 l=   0 prim:       INTEGER           :00  

So why isn't it finding the signing cert?
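
If OpenSSL is failing to match the embedded signer for whatever reason, the certificate can be supplied explicitly; since it is self-signed, it also has to be passed as the trust anchor:

openssl cms -verify -in demo.p7m -inform DER -certfile cert.pem -CAfile cert.pem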

FTP: ls timeout, even in passive mode

Posted: 10 Jul 2022 05:04 PM PDT

I'm having trouble listing files over FTP. I can connect fine, but ls doesn't seem to work. The output after enabling debug mode is below:

ftp> ls
ftp: setsockopt (ignored): Permission denied
---> PORT xx,xx,xx,xx,xx,xx
200 PORT command successful
---> LIST
425 Unable to build data connection: Connection timed out
ftp> passive
Passive mode on.
ftp> ls
ftp: setsockopt (ignored): Permission denied
---> PASV
227 Entering Passive Mode (xx,xx,xx,xx,xx,xx).
---> LIST
^C
421 Service not available, remote server has closed connection
receive aborted
waiting for remote to finish abort
ftp>

This is happening only on my server (i.e., it works perfectly from my local machine), so I'm guessing this has something to do with the client end, but I have no idea what.

Thanks in advance. Do comment if I should add more info.

Exim installed, can send mail but not receive any

Posted: 10 Jul 2022 06:04 PM PDT

I am trying to set up a mail service on my server. I installed exim4 and configured it. I can send emails to any address, and send from one local user to another, but I cannot receive any.

When I try to send one from Gmail, I get a mail from the Gmail daemon with the subject Delivery Status Notification (Failure), stating: Recipient address rejected: User unknown in relay recipient table.

The user exists for sure because I replied to the mail I first sent from my server.

My MX lookup:

example.org mail is handled by 10 mx2.example.org
example.org mail is handled by 10 mx1.example.org

Any idea on what is going wrong?

Thank you in advance
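
One thing worth verifying (a hedged guess: "relay recipient table" is wording more typical of Postfix than of exim): check which machines actually answer for the MX hosts, in case inbound mail is landing on a different MTA than the box just configured:

dig +short mx1.example.org mx2.example.org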

Sphinx searchd: failed to lock .spl file, no such file or directory

Posted: 10 Jul 2022 03:02 PM PDT

I use Sphinx for indexing in my development environment, and it works fine. But when I take it to the server, I can index, and the indexes exist with search working on them, but every time I run the command searchd --config configfile, it gives me an error:

Failed to lock .spl file, no such file or directory. NOT SERVING
Fatal: no valid indexes to serve.

I gave write permissions on that directory, so I am pretty sure it is not a permission issue. I know I am not giving enough info about my case, but in general, what could cause a file not to be locked? Is it possible to unlink it manually? What else could it be?

Help, please; it's been two weeks of trying to solve this with no success, and I am really frustrated. Thanks.
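
Two quick sanity checks (the index directory is an assumption; use the path values from the config file): searchd creates the .spl lock next to the index files, so that directory must exist and be writable by the user searchd runs as, and indexer must have written into the same directory searchd reads:

ls -ld /var/lib/sphinx/data
sudo -u sphinx touch /var/lib/sphinx/data/test.spl \
  && sudo rm /var/lib/sphinx/data/test.spl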
