Friday, November 26, 2021

Recent Questions - Server Fault



Cannot Remote-SSH on VS-Code from a WSL2 Remote Folder

Posted: 26 Nov 2021 08:17 AM PST

I have installed VS Code (v1.62.3) on Windows (v10.0.19043) and installed the WSL2 Ubuntu 20.04 distribution.

I reopened a folder in WSL, and in the WSL2 terminal I set up my SSH keys (/home/user/.ssh/ssh-privatefile). From the WSL2 terminal within VS Code I can ssh username@IP to a Linux instance just fine.

What I am trying to do from this WSL2 folder is open a Remote-SSH connection to that same Linux server, so I can browse its filesystem too.

I have set up a config file (/home/user/.ssh/configfile) and reference it in the Remote-SSH settings:

Host SomeName
    Hostname IP
    User username
    IdentityFile /home/user/.ssh/ssh-privatefile

When I try to connect to this remote host in a new window, I get the error "Could not establish connection to SomeName".

The full log below includes no such identity: /home/user/.ssh/ssh-privatefile: No such file or directory, which tells me that VS Code is trying to resolve the key path from Windows rather than from WSL.
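A minimal workaround sketch, based on the log below: the OpenSSH_for_Windows_8.1p1 line shows that Remote-SSH is invoking the Windows OpenSSH client, which cannot read WSL2 paths such as /home/user/..., so the IdentityFile needs to exist at a Windows path. The paths here are illustrative:

# Run inside the WSL2 terminal: copy the key to the Windows user profile.
cp /home/user/.ssh/ssh-privatefile /mnt/c/Users/my-user/.ssh/

# Then, in the config file referenced by remote.SSH.configFile, point
# IdentityFile at the Windows path (the config file itself may need the
# same treatment):
#   Host SomeName
#       Hostname IP
#       User username
#       IdentityFile C:\Users\my-user\.ssh\ssh-privatefile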

Any ideas how to resolve this?

Thanks

[16:02:13.461] Log Level: 2
[16:02:13.546] remote-ssh@0.66.1
[16:02:13.547] win32 x64
[16:02:13.547] SSH Resolver called for "ssh-remote+7b22686f73744e616d65223a226261734a50556174227d", attempt 1
[16:02:13.548] "remote.SSH.useLocalServer": false
[16:02:13.548] "remote.SSH.showLoginTerminal": false
[16:02:13.548] "remote.SSH.remotePlatform": {}
[16:02:13.548] "remote.SSH.path": undefined
[16:02:13.549] "remote.SSH.configFile": /home/user/.ssh/configfile
[16:02:13.549] "remote.SSH.useFlock": true
[16:02:13.549] "remote.SSH.lockfilesInTmp": false
[16:02:13.549] "remote.SSH.localServerDownload": auto
[16:02:13.549] "remote.SSH.remoteServerListenOnSocket": false
[16:02:13.549] "remote.SSH.showLoginTerminal": false
[16:02:13.549] "remote.SSH.defaultExtensions": []
[16:02:13.549] "remote.SSH.loglevel": 2
[16:02:13.549] SSH Resolver called for host: basJPUat
[16:02:13.549] Setting up SSH remote "basJPUat"
[16:02:13.579] Using commit id "ccbaa2d27e38e5afa3e5c21c1c7bef4657064247" and quality "stable" for server
[16:02:13.582] Install and start server if needed
[16:02:21.163] Checking ssh with "ssh -V"
[16:02:21.250] > OpenSSH_for_Windows_8.1p1, LibreSSL 3.0.2
[16:02:21.255] Using SSH config file "/home/user/.ssh/configfile"
[16:02:21.255] Running script with connection command: ssh -T -D 53694 -F "/home/user/.ssh/configfile" "basJPUat" bash
[16:02:21.258] Terminal shell path: C:\Windows\System32\cmd.exe
[16:02:21.514] > ]0;C:\Windows\System32\cmd.exe
[16:02:21.514] Got some output, clearing connection timeout
[16:02:21.589] > The authenticity of host 'IP (IP)' can't be established.
> ECDSA key fingerprint is SHA256:LAFCfhMRJbGsIkeEH6Iy5YfVRtCKGMxIP+6peEvd5f0.
> Are you sure you want to continue connecting (yes/no/[fingerprint])?
[16:02:21.589] Detected fingerprint confirmation message
[16:02:21.590] Showing fingerprint confirmation dialog
[16:02:23.264] Got fingerprint response: yes
[16:02:23.265] "install" wrote data to terminal: "yes"
[16:02:23.277] > y
[16:02:23.299] > Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
> Failed to add the host to the list of known hosts (C:\\Users\\my-user/.ssh/known_hosts).
[16:02:23.328] > no such identity: /home/user/.ssh/ssh-privatefile: No such file or directory
[16:02:23.346] > userName@IP: Permission denied (publickey).
> The process tried to write to a nonexistent pipe.
>
[16:02:24.625] "install" terminal command done
[16:02:24.625] Install terminal quit with output:
[16:02:24.625] Received install output:
[16:02:24.626] Resolver error: Error:
    at Function.Create (c:\Users\my-user\.vscode\extensions\ms-vscode-remote.remote-ssh-0.66.1\out\extension.js:1:429193)
    at c:\Users\my-user\.vscode\extensions\ms-vscode-remote.remote-ssh-0.66.1\out\extension.js:1:427209
    at Object.t.handleInstallOutput (c:\Users\my-user\.vscode\extensions\ms-vscode-remote.remote-ssh-0.66.1\out\extension.js:1:427772)
    at Object.t.tryInstall (c:\Users\my-user\.vscode\extensions\ms-vscode-remote.remote-ssh-0.66.1\out\extension.js:1:521703)
    at processTicksAndRejections (internal/process/task_queues.js:93:5)
    at async c:\Users\my-user\.vscode\extensions\ms-vscode-remote.remote-ssh-0.66.1\out\extension.js:1:485356
    at async Object.t.withShowDetailsEvent (c:\Users\my-user\.vscode\extensions\ms-vscode-remote.remote-ssh-0.66.1\out\extension.js:1:488706)
    at async Object.t.resolve (c:\Users\my-user\.vscode\extensions\ms-vscode-remote.remote-ssh-0.66.1\out\extension.js:1:486435)
    at async c:\Users\my-user\.vscode\extensions\ms-vscode-remote.remote-ssh-0.66.1\out\extension.js:1:560057
[16:02:24.632] ------

Best practices to troubleshoot stuck php application due to internal curl calls to unresponsive endpoints

Posted: 26 Nov 2021 08:07 AM PST

Recently I found myself with a website (a Prestashop e-commerce site on a CentOS PHP-FPM / Apache / MySQL machine) that was down and not responding to web requests.

After investigation, the issue turned out to be an API call made with php-curl to an endpoint that was temporarily offline, inside an application PHP file that was included on every page of the website.

The cURL call had been made without a CURLOPT_TIMEOUT_MS setting, so visiting users quickly exhausted the maximum number of PHP connections, blocking the php-fpm processes and preventing my server from accepting other incoming connections.

I wonder whether one can quickly and effectively prevent or identify such a problem "in production" from the terminal if it happens again, in particular how to quickly work out which endpoint is blocked, or which file contains the script that blocked the server (a sketch of the kind of terminal checks I have in mind follows the list below). In my case I had to investigate at the application level rather than from the server, because:

  • launching "top" the server shows the list of blocked php-fpm processes without any additional information to understand the problem (also server load average was about 0.00 since there's was almost no activity due to stuck connections).
  • Launching "netstat -nputw" show me a lot of internal connections in TIME_WAIT status, but again no information about the outage "culprit" (could I see the endpoint called up by php-curl with netstat or a similar network command ?)
  • Launching a "strace" of the php-fpm processes I see a lot of involved files, but this is not very helpful since the site, with average traffic, opens dozens and dozens of files.
  • The webserver logs only informed me of timeout connections to web resources, but not of the script containing the problematic cURL call.
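A rough sketch of the kind of terminal triage that can reveal the blocked endpoint, assuming a standard Linux toolset (ss, strace) and that the workers are stuck on an outbound HTTP(S) call; the PID placeholder is whatever php-fpm reports on your system:

# Established outbound connections held by php-fpm workers; a stuck external
# endpoint tends to appear many times with the same peer address:port.
ss -tnp | grep php-fpm

# For a single stuck worker, see which network/poll syscall it is sleeping in
# (the blocked curl call shows up as connect/poll against the remote address):
strace -f -e trace=network,poll,select -p <php-fpm-PID>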

Thanks for your help.

How do you profile DNS response times on CentOS8?

Posted: 26 Nov 2021 07:21 AM PST

I would like to know exactly how long a DNS query takes to resolve an address, so I can compare different servers (my own machine, where I run named; GCP DNS; other public DNS servers).

The problem is that my app needs to resolve a hostname whose IP has just changed inside a CDN (e.g. Cloudflare), and I need this resolution to be as fast as possible.

So I would like to collect statistics on how fast different DNS servers can resolve the hostname to the new IP.
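For reference, dig's "Query time" statistic is an easy way to collect exactly this; a rough sketch (the resolver IPs and query name are just examples):

for srv in 127.0.0.1 8.8.8.8 1.1.1.1; do
    echo -n "$srv: "
    dig @"$srv" example.com A +noall +stats | grep 'Query time'
done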

aws - ECS capacity provider permission

Posted: 26 Nov 2021 07:15 AM PST

I'm trying out Terraform for managing my infrastructure and ran into an issue I'm not sure how to look into.

I'm attempting to create a capacity provider for my ECS cluster, however I'm getting the following error:

ClientException: The capacity provider could not be created because you do not have autoscaling:CreateOrUpdateTags permissions to create tags on the Auto Scaling group  

Below are my files:

Launch config and autoscale group creation

resource "aws_launch_configuration" "ecs_launch_configuration" {      name = "ecs_launch_configuration"      image_id = "ami-0fe19057e9cb4efd8"      user_data = "#!/bin/bash\necho ECS_CLUSTER=ecs_cluster >> /etc/ecs/ecs.config"      security_groups = [aws_security_group.vpc_securityGroup.id]      iam_instance_profile = aws_iam_instance_profile.iam_role_profile.name      key_name = "key_pair_name"      instance_type = "t2.small"  }    resource "aws_autoscaling_group" "ecs_autoScale_group" {      name                      = "ecs_autoScale_group"      desired_capacity          = 1      min_size                  = 1      max_size                  = 2      launch_configuration = aws_launch_configuration.ecs_launch_configuration.name      vpc_zone_identifier = [aws_subnet.vpc_subnet_public.id]      tag {          key                 = "AmazonECSManaged"          value               = true          propagate_at_launch = true      }  }  

ECS Cluster and capacity provider creation

resource "aws_ecs_cluster" "ecs_cluster"{      name = "ecs_cluster"      capacity_providers = [ aws_ecs_capacity_provider.ecs_capacity_provider.name ]  }    resource "aws_ecs_capacity_provider" "ecs_capacity_provider" {      name = "ecs_capacity_provider"      auto_scaling_group_provider {          auto_scaling_group_arn = aws_autoscaling_group.ecs_autoScale_group.arn          managed_scaling {              maximum_scaling_step_size = 2              minimum_scaling_step_size = 1              status                    = "ENABLED"              target_capacity           = 1          }      }  }  

I was able to create this from the console's GUI; only Terraform returns this error.
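The error refers to the IAM identity Terraform runs as rather than to the Terraform code: attaching a capacity provider with managed scaling tags the Auto Scaling group with AmazonECSManaged, which requires autoscaling:CreateOrUpdateTags. A hedged way to confirm which identity is missing the permission (the ARN below is a placeholder):

aws sts get-caller-identity
aws iam simulate-principal-policy \
    --policy-source-arn arn:aws:iam::123456789012:user/terraform \
    --action-names autoscaling:CreateOrUpdateTags

Since creating it from the console worked, the console user's permissions likely differ from the credentials Terraform uses.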

Help would be greatly appreciated.

Thanks in advance.

Connect to an /ext host of a Fortinet VPN with Ubuntu

Posted: 26 Nov 2021 07:11 AM PST

I have used openfortivpn many times, but I have never been able to properly configure such a VPN in Network Manager.

Unfortunately, I now have to connect to a host of the form [some ip]/ext (e.g. 192.168.10.10/ext), and I can't do that even via the CLI. It only works with the 'original' FortiClient.

How should I configure Network Manager in this case? I want to be able to connect to this VPN as quickly as selecting a Wi-Fi network.

When I try to connect using the CLI, openfortivpn throws an error:

# vpn.cfg
host=192.168.10.10/ext
port=443
username=me
password=mypass
trusted-cert=mytrustedcert
$ sudo openfortivpn -c vpn.cfg
ERROR:  getaddrinfo: Name or service not known
INFO:   Closed connection to gateway.
ERROR:  connect: Connection refused
INFO:   Could not log out.

Additionally, in the FortiClient GUI I have selected the "Do not Warn Invalid Server Certificate" option.
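The getaddrinfo error suggests openfortivpn is treating the whole string 192.168.10.10/ext as a hostname. A hedged sketch, assuming the /ext suffix is the portal realm, which openfortivpn takes as a separate option:

sudo openfortivpn 192.168.10.10:443 --realm=ext \
    --username=me --trusted-cert=<cert-digest>
# or in vpn.cfg:  realm = ext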

Task Scheduler GPO for purging files does not apply due to OneDrive

Posted: 26 Nov 2021 06:02 AM PST

I am trying to create a task with the following PS script:

$locations = "$env:userprofile\Desktop\New folder (2)", "$env:userprofile\Desktop\New folder (3)"
$Daysback = "-30"
$CurrentDate = Get-Date
$DatetoDelete = $CurrentDate.AddDays($Daysback)
foreach ($location in $locations) { Get-ChildItem $location -Recurse | Where-Object { $_.LastWriteTime -lt $DatetoDelete } | Remove-Item }

The new folders were created for testing; eventually it should be the Downloads folder. When running the script locally, using the path C:\Users\name.lastname\Desktop..., it works fine and deletes the files in the correct directory. However, I have to check manually by going to C:\Users\name.lastname\Desktop... to find that out.

The "New folder (2)" and "New folder (3)" on my desktop still have the files (which are older than 30 days, as the script requires) after applying the GPO to my machine. When I checked the folder path (C:\Users\name.lastname\OneDrive - tekexperts.onmicrosoft.com\Desktop), I started to suspect that the variable $env:userprofile is finding the OneDrive-synced folders on my Desktop and consequently not deleting anything.

I would really appreciate it if somebody could advise whether it is possible for my script to target the actual system folders instead of the synced ones. Thank you in advance.
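A possible adjustment, sketched below: ask Windows for the effective known-folder locations instead of hard-coding $env:userprofile, since OneDrive Known Folder Move redirects Desktop (and can redirect Downloads) under the OneDrive directory. Variable names are illustrative:

# Resolve the Desktop and Downloads folders as the shell actually sees them
# (these return the OneDrive locations when Known Folder Move is enabled):
$desktop   = [Environment]::GetFolderPath('Desktop')
$downloads = (New-Object -ComObject Shell.Application).NameSpace('shell:Downloads').Self.Path
$locations = @("$desktop\New folder (2)", "$desktop\New folder (3)")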

windows remote shutdown & "The RPC server is unavailable"

Posted: 26 Nov 2021 05:42 AM PST

I have a Windows computer in a remote network which I want to restart. I do not have physical access to that PC.

The reason I want to restart it: Remote Desktop fails to connect, with an endless "configuring remote session".

When I invoke shutdown /f /g /m \\192.168.x.x, after around 30 seconds I get the message "The RPC server is unavailable. (1722)".

  • I can successfully ping the remote PC.
  • I was able to connect to it using mmc to enable the "Auto sign in" policy, and the change succeeded.
  • A remote tasklist /s 192.168.x.x fails with the same "The RPC server is unavailable" error.

So some connectivity is available, but something is preventing the remote shutdown.

Question: any ideas how this can be fixed remotely? Or is there some other way to perform a remote restart?
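If the classic RPC path stays blocked, a hedged alternative is to go over WinRM instead (this assumes PowerShell Remoting is enabled on the target, and that the IP address is usable for remoting, which may require TrustedHosts and explicit credentials):

Restart-Computer -ComputerName 192.168.x.x -Protocol WSMan -Force
# or run the shutdown on the remote side itself:
Invoke-Command -ComputerName 192.168.x.x -ScriptBlock { shutdown /r /t 0 }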

Linux equivalent to "query user /server:my-server-address"

Posted: 26 Nov 2021 05:12 AM PST

The command from the title displays all connected users for my terminal server. Is there an equivalent Linux command to display the same information?
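For comparison, the usual commands on a Linux box are these (all part of a standard install with systemd):

who                     # logged-in users with terminal and login time
w                       # the same, plus what each session is currently running
loginctl list-sessions  # systemd-logind view, including graphical and remote sessions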

How to update Windows server 2019 desktop 1809 to 2004 on Google cloud compute engine?

Posted: 26 Nov 2021 04:57 AM PST

I have created a virtual machine using Google Cloud Compute Engine.

I have researched how to install Windows 10 and learned that it is much more difficult to install.

Google Cloud only offers Windows Server 2019 Datacenter at version 1809 (with the Desktop Experience). I'd like to update it to version 2004.

Is that possible? Can anyone help me?

Can NGINX call another service before dispatching

Posted: 26 Nov 2021 06:11 AM PST

We are using NGINX as a reverse proxy; it dispatches calls from outside to our internal Java microservices.


We would like to add a special service which would serve as a "man-in-the-middle", but only for the request part. Its purpose is to decorate the original request (authentication, adding/modifying HTTP headers, verifying access rights). The "decorative tasks" involve complicated business logic which cannot be configured in NGINX itself.

We want the service to be called first, and then its response (especially the HTTP headers!) forwarded as the request to one of the microservices. Optionally, the dispatched service could also be called with the original body, but with the HTTP headers returned by the decorator service.

When the decorator service returns an HTTP error, that error should go back directly to the caller without dispatching.

The service is implemented as a Java Spring Boot application. It is a regular web service.

Can this be configured in NGINX, and how?

To be clear: I am not asking how to implement this specific service. I only need to know whether (and how) NGINX can be configured to call another service before dispatching the call, and to pass the headers (and maybe also the body, but not necessarily) returned by this service into the dispatched call.
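One way this is commonly done is with the ngx_http_auth_request_module: nginx calls the decorator as a subrequest, forwards the original request only if the decorator returns 2xx, and can copy chosen response headers into the proxied request. A hedged sketch (the upstream names, the /check path, and the X-Decorated-Auth header are assumptions; note that the subrequest's body is discarded and only 401/403 from the decorator are passed through to the caller as-is):

location / {
    auth_request /_decorate;
    auth_request_set $decorated_auth $upstream_http_x_decorated_auth;
    proxy_set_header X-Decorated-Auth $decorated_auth;
    proxy_pass http://java_microservices;
}

location = /_decorate {
    internal;
    proxy_pass http://decorator:8080/check;
    proxy_pass_request_body off;
    proxy_set_header Content-Length "";
    proxy_set_header X-Original-URI $request_uri;
}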


Org Policies error when creating a Cloud Function

Posted: 26 Nov 2021 04:08 AM PST

When trying to create a "hello world" Cloud Function, I get the error message:

"The request has violated one or more Org Policies. Please refer to the respective violations for more information."

Now, which org policies have been violated? In the Logs Explorer I find an error entry like this:

{
  insertId: "XXX"
  logName: "projects/XXX/logs/cloudaudit.googleapis.com%2Factivity"
  protoPayload: {10}
  receiveTimestamp: "2021-11-26T11:42:16.735011108Z"
  resource: {2}
  severity: "ERROR"
  timestamp: "2021-11-26T11:42:16.490247Z"
}
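The violated constraint is usually spelled out inside the protoPayload of that audit entry once expanded. It can also help to list which policies are actually in force on the project; a hedged sketch (the project ID is a placeholder):

gcloud resource-manager org-policies list --project=my-project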

How to mark connections to route via multiple gateways?

Posted: 26 Nov 2021 08:35 AM PST

Hi, I am having trouble setting up permanent routes for my network interfaces.

I have:

OS: Linux (CentOS 7)

eth0: IP 172.16.3.6 -- Gateway: 172.16.0.1

eth0:1: IP 10.1.5.102 -- Gateway: 10.1.5.101

eth0:2: IP 10.1.5.106 -- Gateway: 10.1.5.105

and I want to connect to:

10.10.10.1:5160 via 10.1.5.102 (Sip-Trunk Connection (udp))

10.10.10.1:5161 via 10.1.5.106 (Sip-Trunk Connection (udp))

There is one destination IP but different ports.

So how can I mark and route these connections? (By default, connections go out with the 172.16.3.6 IP address.)
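A hedged sketch of the usual Linux policy-routing approach: mark the two flows by destination port with iptables, then send each mark through its own routing table (the table numbers are arbitrary; to make this permanent, put the commands in your distribution's network scripts afterwards):

iptables -t mangle -A OUTPUT -p udp -d 10.10.10.1 --dport 5160 -j MARK --set-mark 1
iptables -t mangle -A OUTPUT -p udp -d 10.10.10.1 --dport 5161 -j MARK --set-mark 2

ip route add default via 10.1.5.101 dev eth0 src 10.1.5.102 table 101
ip route add default via 10.1.5.105 dev eth0 src 10.1.5.106 table 102

ip rule add fwmark 1 table 101
ip rule add fwmark 2 table 102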

How to fix "InvalidInput 400: Record name {} is not valid for hosted zone" on AWS?

Posted: 26 Nov 2021 07:02 AM PST

I'm migrating a site currently hosted on github pages to AWS.

The site in question is on an AWS instance with an Elastic IP of 13.40.0.39 (http://13.40.0.39/). Visiting the Elastic IP directly does show the site correctly.

I want the domain www.whitewaterwriters.com to show the site. I created a hosted zone in Route 53.


I changed the nameservers at my domain provider (qiq) and later confirmed with whois that they have been changed correctly. As per the screenshot, I created a record pointing to the correct IP address.

However, when I access the site I get "Hmm. We're having trouble finding that site", and using the dig command I get SERVFAIL.

I tried 'test record' in AWS and got:


Error occurred Bad request. (InvalidInput 400: Record name 'https://www.whitewaterwriters.com/' is not valid for hosted zone: https\072\057\057www.whitewaterwriters.com\057.)

My question is: what causes this particular error in AWS (which currently seems to return no search results)? And more generally, what would be the next step for debugging DNS issues in AWS?
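One clue worth noting: in the error, the hosted zone's name decodes (\072 is ":" and \057 is "/") to "https://www.whitewaterwriters.com/", which suggests the hosted zone was created with the full URL rather than the bare domain name; Route 53 zone and record names take domain names only. A hedged sketch of the usual next checks with dig (the awsdns name server below is a placeholder; use one from the zone's NS set):

dig +short NS whitewaterwriters.com
dig +short A www.whitewaterwriters.com @ns-XXX.awsdns-XX.org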

Nginx to point to different subdomains

Posted: 26 Nov 2021 07:01 AM PST

I have a domain (example.com) for which I am trying to use subdomains to show different parts of the application. Right now it is all hosted in AWS.

This is the sort of setup I am trying to go for:

sbx.example.com
trn.example.com
office.example.com

My nginx conf right now is as follows:

server {
    listen 80;
    root /var/www/html/apt-front/dist;

    # Add index.php to the list if you are using PHP
    index index.html index.htm index.nginx-debian.html;

    server_name example.com www.example.com;

    location / {
        try_files $uri $uri/ /index.php$is_args$args;
    }

    location /api {
        alias "/var/www/html/api/public";
        try_files $uri $uri/ @api;

        location ~ \.php$ {
            fastcgi_split_path_info ^(.+\.php)(/.+)$;
            fastcgi_pass unix:/var/run/php/php7.4-fpm.sock;
            fastcgi_index index.php;
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME /var/www/html/mint-api/public/index.php;
        }
    }

    location @api {
        rewrite /api/(.*)$ /api/index.php?/$1 last;
    }

    include /etc/nginx/sites-available/*.conf;
}

I know that in AWS I would need to create a record for the domain; I believe that would be an NS record with the name of the subdomain (such as sbx.example.com). What I was thinking of doing is creating another repository (a clone) with the changes I need for SBX, creating another server block, and under server_name just changing the subdomain. Thoughts?
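Roughly, yes, that is the usual pattern, with one caveat: in Route 53 each subdomain normally gets an A (or alias) record pointing at the server, not an NS record, unless you are delegating the subdomain to a separate zone. On the nginx side, a hedged sketch of an extra server block (paths are illustrative):

server {
    listen 80;
    server_name sbx.example.com;
    root /var/www/html/apt-front-sbx/dist;
    index index.html;
    # ...same location blocks as the main server, adjusted for the SBX paths
}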

How to set maximum queue connection for nginx port in Windows?

Posted: 26 Nov 2021 04:01 AM PST

I am learning to design scalable systems, for now on a Windows machine. I created two servers that listen on ports 27016 and 27015; all they do is return a "HelloWorld!" response. I set listen(ListenSocket, SOMAXCONN) for both servers when creating them in Visual Studio, following the Winsock tutorial. Using JMeter I performed a load test on each of them individually (1000 requests per second) and everything was OK.

Now that I have introduced nginx, which listens on port 80 and load-balances the requests (1000 req/sec) between the two servers above, many requests are dropped during the JMeter load test.

I am assuming the listen queue for port 80 is not configured for high traffic and want to tune it. How can I set the queue size to the maximum possible value, either from the nginx config or from a cmd command?
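On the nginx side, the accept-queue length is set per listen directive with the backlog parameter; the operating system still caps it at its own SOMAXCONN value. A hedged sketch (the number is illustrative):

server {
    listen 80 backlog=4096;
    # ...
}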

Connect MongoDB Compass to GCP Ubuntu VM

Posted: 26 Nov 2021 07:03 AM PST

I am a little lost on how I should host MongoDB on an Ubuntu VM. I have barely worked with Ubuntu before, so I struggle to understand a lot of aspects.

I followed guide: https://docs.mongodb.com/tutorials/install-mongodb-on-ubuntu/

Everything seemed to work in the GCP VM SSH console: I was able to log in to the DB with the admin login and password. But I don't understand why I cannot connect to it from external sources, or how to debug the issue.

I am trying to access the DB with the VM's external IP, provided by GCP Compute Engine, using the same authentication information, but it doesn't work; all I get is this error after around a minute of waiting:

Could not connect to MongoDB on the provided host and port

Is there any guide or advice to help me understand what exactly is wrong? I am lost and don't know what to check to even find the issue.
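Two common causes worth checking, sketched below: by default mongod binds only to 127.0.0.1, and GCP blocks inbound TCP 27017 unless a firewall rule allows it (the rule name and source range are placeholders):

# On the VM: what is mongod actually bound to?
grep -A 2 '^net:' /etc/mongod.conf     # bindIp should include 0.0.0.0 or the VM's IP
sudo ss -tlnp | grep 27017

# From a machine with gcloud: open the port to your client only.
gcloud compute firewall-rules create allow-mongodb \
    --allow=tcp:27017 --source-ranges=<your-client-ip>/32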

Systemd: how to start a service after another, started by timer, finishes?

Posted: 26 Nov 2021 08:02 AM PST

The goal

On Debian Stretch, we want to download updates and install them immediately, because we monitor the number of pending upgrades and currently get alerted for what is really a routine condition.

Standard stretch

Standard stretch downloads updates and installs them using:

  • apt-daily.service
  • apt-daily.timer
  • apt-daily-upgrade.service
  • apt-daily-upgrade.timer

The timers run independently at random daily times so there can be almost 24 hours between download and installation.

Research and attempts

apt-daily-upgrade.service modification

Based on https://superuser.com/questions/1250874/trigger-another-systemd-unit-to-start-before-timer-unit-starts, I disabled apt-daily-upgrade.timer and modified apt-daily-upgrade.service, adding "Wants=apt-daily.service"

I ran systemctl daemon-reload after the change.

24+ hours later /var/log/unattended-upgrades/unattended-upgrades.log did not show unattended-upgrades had been run.

apt-daily.service modification

Based on Chaining custom systemd services, I understood apt-daily.service needs to be configured to start apt-daily-upgrade.service rather than apt-daily-upgrade.service being configured to start after apt-daily.service.

I backed out the apt-daily-upgrade.service modification and created /etc/systemd/system/apt-daily.service with the addition "Before=apt-daily-upgrade.service":

[Unit]
Description=Daily apt download activities
Documentation=man:apt(8)
ConditionACPower=true
After=network-online.target
Wants=network-online.target
Before=apt-daily-upgrade.service

[Service]
Type=oneshot
ExecStart=/usr/lib/apt/apt.systemd.daily update

I ran systemctl daemon-reload after the change.

24+ hours later /var/log/unattended-upgrades/unattended-upgrades.log did not show unattended-upgrades had been run.

The systemd journal showed "Daily apt download activities" messages, believed to show that apt-daily.service had been run:

# journalctl | grep 'apt download activities'
Apr 08 16:10:05 web2.iciti.av systemd[1]: Starting Daily apt download activities...
Apr 08 16:10:08 web2.iciti.av systemd[1]: Started Daily apt download activities.
...

Conclusion

How can we effect our goal, ideally within the systemd framework?
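One hedged sketch, staying within systemd: the Before= ordering alone never activates the upgrade unit; the download unit also has to pull it in with Wants=. A drop-in on apt-daily.service would do both (keep apt-daily-upgrade.timer disabled so it does not also fire on its own, and run systemctl daemon-reload afterwards):

# /etc/systemd/system/apt-daily.service.d/chain-upgrade.conf
[Unit]
Wants=apt-daily-upgrade.service
Before=apt-daily-upgrade.service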

mail issues "535 Authentication credentials invalid"

Posted: 26 Nov 2021 06:01 AM PST

I'm trying to send mail from the command line with my 1and1 credentials.

The following configuration worked to send mail from my Nextcloud server:

  • send mode : smtp
  • encryption : SSL/TLS
  • from address : contact@mydomain.fr
  • Authentication method : login
  • Authentication required : yes
  • server address : auth.smtp.1and1.fr:465
  • Credentials : contact@mydomain.fr - mypassword

However, the following configuration and command still fail:

/etc/ssmtp/ssmtp.conf :

AuthUser=contact@mydomain.fr
AuthPass=mypassword
mailhub=auth.smtp.1and1.fr:465
rewriteDomain=mydomain.fr
UseTLS=YES

/etc/ssmtp/revaliases :

vmonteco:contact@mydomain.fr:auth.smtp.1and1.fr:465  

command line :

$ echo test | mail -v -s "Test" mymail@hotmail.fr
mail: Loading /etc/mail.rc
mail: No such file to load: /home/vmonteco/.mailrc
mail: Setting up PseudoRandomNumberGenerator: *SSL RAND_*
[<-] 220 kundenserver.de (mreue105) Nemesis ESMTP Service ready
[->] EHLO localhost.localdomain
[<-] 250 SIZE 69920427
[->] AUTH LOGIN
[<-] 334 VXNlcm5hbWU6
[->] Y29udGFjdEB2b250ZWNvLm5pbmph
[<-] 334 UGFzc3dvcmQ6
[<-] 535 Authentication credentials invalid
sendmail: Authorization failed (535 Authentication credentials invalid)

Is there something I forgot?

EDIT :

I also tried with port 25 and without TLS
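One small sanity check, for what it is worth: the AUTH LOGIN exchange in the transcript above contains the base64-encoded username the client actually sent, and decoding it confirms which account the provider is rejecting:

echo 'Y29udGFjdEB2b250ZWNvLm5pbmph' | base64 -d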

Is the attack surface of Docker vs. LXC/systemd-nspawn significantly different?

Posted: 26 Nov 2021 08:02 AM PST

One decision point in choosing between an application container solution and an OS container solution is security. I'm not knowledgeable enough to be able to compare and contrast the two. I assume they're different, but is one significantly more open than the other?

network drops, dhcp renewal fails

Posted: 26 Nov 2021 07:03 AM PST

I'm struggling with a DHCP problem at work and can't seem to find a solution. I'm not sure if you can help me, but I thought I'd give it a try, as I feel I have tried almost everything in my power!

Here's the thing. There are four computers, which are basically exactly the same as any other computer on our site. Every 4 hours, their connection drops. It happens every day and started a few weeks ago. There wasn't any change on our side: still the same server, still the same switch. We even swapped one computer for a new one and the problem still occurred.

What I understand is that the DHCP client tries to renew the lease after, say, 2 hours and again after 3 hours; the lease isn't renewed, and in the end the connection drops. A few seconds later, the computer broadcasts a brand-new request and gets an IP address.

We know that:

  • Our dhcp lease lasts 4 hours.
  • We have one dhcp server on this location.
  • The dhcp pool has got PLENTY of addresses available.

At this point we have tried different things, including setting a reservation on the DHCP server to see what happens. I don't know yet if that solved our problem. Still, I would like to know what caused it.

Any thoughts on my issue? Thanks!

Update

Sorry I didn't answer sooner. IP reservation didn't work, as I expected. Something during the renewal process fails at some point.

I'm going to dig into the leads you gave me. I can look at the switches and see what is going on there, but my rights on the DHCP server are much more limited.

I will try to give you an update as soon as I can. Wireshark may help: I'll launch it this afternoon on a client around the time of the expected disconnection and see what happens.
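For reference, a capture restricted to DHCP traffic keeps the trace readable; a sketch (the interface name is a placeholder):

tcpdump -ni eth0 -vv 'udp and (port 67 or port 68)'
# Wireshark display filter: "bootp" (older versions) or "dhcp" (3.x and later)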

Update 2

Hi guys.

Sad to say we didn't find anything relevant.

Users have had this problem for a few months and they have already been very (...) very patient. I really can't ask them to wait any longer.

So, static IP addresses it is. That's the only workaround I have. I keep my fingers crossed hoping that this issue won't spread to other workstations.

Thank you guys for your help!

Apache Reverse Proxy in front of RD Web Access IIS

Posted: 26 Nov 2021 06:01 AM PST

I have just configured a Microsoft Remote Desktop Services deployment on an internal Windows Server 2012 R2 server. I have access to RDP from outside the network via port forwarding. However, because I have additional web servers running on ports 80/443, I can't expose RD Web Access (running on IIS) directly to the internet.

I have a reverse proxy configuration with Apache for all my internal sites, so I'm trying to use the same for RD Web Access. My configuration (for both HTTP & HTTPS) is as follows:

<VirtualHost *:80>
    ServerName foo.example.com

    ProxyPass / http://192.168.1.xxx/
    ProxyPassReverse / http://192.168.1.xxx/
</VirtualHost>

This configuration seems to work but has an issue. When connecting directly to foo.example.com, I get the default IIS page, as expected. However, when accessing http://foo.example.com/RDWeb/, the URL gets changed to http://192.168.1.xxx/RDWeb, which I obviously can't access from outside of my network. I need it to stay as http://foo.example.com/RDWeb/.

I have tried adding ProxyPreserveHost On to my Apache configuration, but when I do that I get an infinite redirect loop, so that doesn't work either. I'm pretty sure this is NOT an IIS issue, because if I set my local hosts file to point foo.example.com to 192.168.1.xxx, it works without issue.

Is there something I'm missing in my Apache Reverse Proxy configurations?

Can I use hosts.allow hosts.deny files to restrict access to my websites (Nginx)?

Posted: 26 Nov 2021 05:04 AM PST

Is there any way how to use hosts.allow and hosts.deny files to restrict access to the websites (Nginx)?

This does not work:

hosts.allow file content:
nginx: SOME-IP-ADDRESS

hosts.deny file content:
nginx: ALL

I am going to use Cloudflare and want to allow connections to Nginx only from Cloudflare IP addresses, but I can't find a way to do it.

Yes, I know I can allow/deny connections by IP address in the Nginx configuration file, but not in this case, because I use:

set_real_ip_from CLOUDFLARE_IP_ADDRESSES;
real_ip_header CF-Connecting-IP;

which replaces the Cloudflare IP addresses with the users' IP addresses, and after that I cannot tell whether the request came from Cloudflare. I want to allow connections only from Cloudflare, but at the same time I want to know the real IP address of each HTTP request.
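One hedged alternative is to enforce "only Cloudflare may reach 80/443" at the firewall rather than in nginx, so real_ip_header can keep rewriting the client address for logging. A sketch with ipset/iptables (the range shown is a single example entry from Cloudflare's published list at https://www.cloudflare.com/ips/; load them all in practice):

ipset create cloudflare hash:net
ipset add cloudflare 173.245.48.0/20
iptables -A INPUT -p tcp -m multiport --dports 80,443 \
    -m set ! --match-set cloudflare src -j DROP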

Any ideas?

Disabling cloud-init if metadata server cannot be reached

Posted: 26 Nov 2021 05:04 AM PST

I'm trying to get cloud-init to not take any action if the metadata server cannot be reached. If cloud-init ignores the error and continues executing (which seems to be the default configuration), then it resets the host SSH key, administrative user password, etc., which is a problem if the virtual machine was being used already beforehand (if password login was configured, then users can no longer access the VM).

I'm seeing this problem in two situations:

  • The metadata server goes down
  • Software is installed that blocks connections to the metadata server during boot (most recently, seeing this with ubuntu-desktop)

Redis (error) NOAUTH Authentication required

Posted: 26 Nov 2021 04:46 AM PST

I get the error:

(error) NOAUTH Authentication required.  

This happens in redis-cli when trying to display KEYS *. I've only set a requirepass, not an auth, as far as I can tell. I'm in redis.conf but do not know what to do.
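For what it is worth, requirepass is exactly the password that the AUTH command checks, so authenticating in redis-cli should clear the error (the password is a placeholder):

redis-cli -a 'mypassword' KEYS '*'
# or, inside an already open redis-cli session:
#   AUTH mypassword
#   KEYS *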

All External Mail to Office 365 Fails SPF, Marked as Junk by EOP in a Hybrid Deployment

Posted: 26 Nov 2021 07:05 AM PST

In short: legitimate emails are landing in Junk folders because EOP (Exchange Online Protection) stamps the messages as junk (SCL 5) and SPF-failed. This happens with mail from all external domains (e.g. gmail.com, hp.com, microsoft.com) to the client's domain (contoso.com).

Background info:

We are at the beginning of migrating mailboxes to Office 365 (Exchange Online). This is a Hybrid Deployment/Rich-Coexistence configuration, where:

  • On-Premises = Exchange 2003 (Legacy) & 2010 (Installed for Hybrid Deployment)
  • Off-Premises = Office 365 (Exchange Online)
  • EOP is configured for SPF checking.
  • MX records are pointing at the on-premises as we haven't completed migrating all mailboxes from on-premises to Exchange Online.

The problem is that when external users send email to an Office 365 mailbox in the organization (mail flow: External -> Mail Gateway -> on-premises mail servers -> EOP -> Office 365), EOP performs an SPF lookup and hard/soft-fails the messages against the external-facing IP address of the Mail Gateway from which it received the mail.

(On-premises mailboxes do not show this problem; only mailboxes migrated to Office 365 do.)

An illustration: Mail Flow from External to Office 365 with EOP

Example 1: from Microsoft to O365

Authentication-Results: spf=fail (sender IP is 23.1.4.9)
 smtp.mailfrom=microsoft.com; contoso.mail.onmicrosoft.com;
 dkim=none (message not signed) header.d=none;
Received-SPF: Fail (protection.outlook.com: domain of microsoft.com does not designate
 23.1.4.9 as permitted sender) receiver=protection.outlook.com; client-ip=23.1.4.9;
 helo=exchange2010.contoso.com;
X-MS-Exchange-Organization-SCL: 5

Example 2: from HP to O365

Authentication-Results: spf=none (sender IP is 23.1.4.9)
 smtp.mailfrom=hp.com; contoso.mail.onmicrosoft.com; dkim=none
 (message not signed) header.d=none;
Received-SPF: None
 (protection.outlook.com: hp.com does not designate permitted sender hosts)
X-MS-Exchange-Organization-SCL: 5

Example 3: from Gmail to O365

Authentication-Results: spf=softfail (sender IP is 23.1.4.9)
 smtp.mailfrom=gmail.com; contoso.mail.onmicrosoft.com;
 dkim=fail (signature did not verify) header.d=gmail.com;
Received-SPF: SoftFail (protection.outlook.com: domain of transitioning
 gmail.com discourages use of 23.1.4.9 as permitted sender)
X-MS-Exchange-Organization-SCL: 5

For message headers with X-Forefront-Antispam-Report, refer to http://pastebin.com/sgjQETzM

Note: 23.1.4.9 is the public IP address of the on-premises hybrid Exchange 2010 server connector to Exchange Online.

How do we stop external emails from being marked as junk by EOP during coexistence stage of a Hybrid Deployment?


[2015-12-12 Update]

This issue was fixed by the Office 365 support (the escalated/backend team) as it has nothing to do with our settings.

They suggested the following:

  1. Whitelist the public IP in EOP Allow List (Tried. It did not help.)
  2. Add an SPF record for our domain: "v=spf1 ip4:XXX.XXX.XXX.XXX ip4:YYY.YYY.YYY.YYY include:spf.protection.outlook.com -all" (We don't think this suggestion is valid: EOP should not be checking gmail.com against our SMTP IP address, since that IP is not in gmail.com's SPF record. That is not how SPF works.)
  3. Make sure TLS is enabled (See below)

The key part is the third point. "If the TLS is not enabled, incoming email from local Exchange will not be marked as internal/trust email, and EOP will check all records," said the support.

Support identified a TLS issue from the following line in our mail headers:

  • X-MS-Exchange-Organization-AuthAs: Anonymous

This indicates TLS was not enabled when EOP received the email, so EOP did not treat the incoming email as trusted. The correct value should look like:

  • X-MS-Exchange-Organization-AuthAs: Internal

However, this was not caused by our settings; the support person helped us make sure our settings were correct by verifying the verbose SMTP logs from our Exchange 2010 Hybrid server.
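For anyone hitting the same "AuthAs: Anonymous" symptom, a quick way to review the hybrid connectors from the on-premises Exchange Management Shell (the connector name is a placeholder; these cmdlets only show configuration, they do not prove TLS was negotiated on a given message):

Get-SendConnector "Outbound to Office 365" | Format-List Name,RequireTLS,Fqdn
Get-ReceiveConnector | Format-List Name,AuthMechanism,Fqdn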

At around the same time, their backend team fixed the problem without letting us know what exactly caused it (unfortunately).

After they fixed it, we found that the message headers had some significant changes as below.

For internal-originated mail from Exchange 2003 to Office 365:

  • X-MS-Exchange-Organization-AuthAs: Internal (It was "Anonymous")

  • SCL=-1 (It was SCL=5)

  • Received-SPF: SoftFail (It was the same)

And for external mails (e.g. gmail.com) to Office 365:

  • X-MS-Exchange-Organization-AuthAs: Anonymous (It was the same)

  • SCL=1 (It was SCL=5)

  • Received-SPF: SoftFail (It was the same)

Although SPF check still soft-fails for gmail.com (external) to Office 365, the support person said it was OK, and all mails would go to the Inbox instead of Junk folder.

As a side note, during troubleshooting, the backend team found one seemingly minor configuration issue -- we had the IP from our Inbound Connector (i.e. public IP of Exchange 2010 Hybrid server) defined in our IP Allow List (suggested by another Office 365 support person as a troubleshooting step). They let us know we should not need to do this and in fact doing so can cause routing issues. They commented that on initial pass the email were not getting marked as spam so there was also a possible issue here. We then removed the IP from the IP Allow List. (However, the spam issue existed before the IP Allow List setting was made. We didn't think Allow List was the cause.)

In conclusion, "it should be EOP mechanism," said the support person. Therefore, the whole thing should be caused by their mechanism.

For anyone interested, the troubleshooting thread with one of their support persons can be viewed here: https://community.office365.com/en-us/f/156/t/403396

SFTP logging doesn't work

Posted: 26 Nov 2021 04:01 AM PST

I have some problems trying to log SFTP actions. I've configured some files, but nothing gets written to the log file.

I'm using Debian 7.

User configuration:

sftpuser02:x:1003:1003:SFTPUSER02:/home/www:/usr/lib/sftp-server

/etc/ssh/sshd_config

Subsystem sftp /usr/lib/openssh/sftp-server -f LOCAL5 -l VERBOSE
Match Group sftpusers
        ChrootDirectory %h
        ForceCommand internal-sftp
        AllowTcpForwarding no
        X11Forwarding no

/etc/rsyslog.conf

local5.* /var/log/sftp.log

ls -l /var/log/sftp.log

-rw-r----- 1 root root 0 ago  3 15:16 /var/log/sftp.log  

I don't know what I'm doing wrong. I've restarted ssh and rsyslog services but it doesn't work.
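A plausible cause, sketched below: with ChrootDirectory %h in effect, the sftp process cannot reach /dev/log, so nothing ever arrives at rsyslog; rsyslog can listen on an additional socket inside the chroot. Also note that ForceCommand internal-sftp overrides the Subsystem line, so the logging flags have to be repeated there. The paths follow the /home/www chroot above:

# /etc/rsyslog.d/sftp.conf (legacy syntax); create /home/www/dev first
$ModLoad imuxsock
$AddUnixListenSocket /home/www/dev/log

# sshd_config, inside the Match block:
#   ForceCommand internal-sftp -f LOCAL5 -l VERBOSE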

Any idea?

Thanks in advance.

NGINX + CodeIgniter doesn't work: can't access controllers or send parameters to controllers

Posted: 26 Nov 2021 06:19 AM PST

I want to run CodeIgniter under NGINX, but it just doesn't work. I want to be able to send parameters to controller methods, but so far I cannot even access a controller using the address example.com/index.php/welcome.php; it says "No input file specified." However, when I type example.com/index.php/ it redirects to the welcome controller. I would say I have tried every nginx config file on the internet.

This is my nginx config:

server {
    ## Your website name goes here.
    server_name compute.amazonaws.com;
    ## Your only path reference.
    root /var/www/;
    listen 80;
    ## This should be in your http block and if it is, it's not needed here.
    index index.html index.htm index.php;

    location ~* \.(ico|css|js|gif|jpe?g|png)(\?[0-9]+)?$ {
        expires max;
        log_not_found off;
    }

    # location / {
    #     # This is cool because no php is touched for static content
    #     #try_files $uri $uri/ /index.php?q=$uri&$args;
    #     try_files $uri $uri/ /index.php;
    # }

    location / {
        try_files $uri $uri/ /index.php?$args;
        if (!-e $request_filename) {
            rewrite ^/(.*)$ /index.php?q=$1 last;
        }
    }

    location ~ \.php$ {
        #try_files $uri =404;
        #fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_split_path_info ^(.+\.php)(.*)$;
        # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
        # With php5-cgi alone:
        #fastcgi_pass 127.0.0.1:9000;
        # With php5-fpm:
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}

This is my welcome controller:

<?php if ( ! defined('BASEPATH')) exit('No direct script access allowed');

class Welcome extends CI_Controller {

    function __construct()
    {
        parent::__construct();
    }

    function index()
    {
        $this->load->helper('url');
        $this->load->view('welcome_message');
    }

    function a($a)
    {
        echo "AHOJ ";
        echo $a;
    }
}

I want to be able to call function a from the welcome controller and get a response. Basically I want to build a REST server, and this is my routes.php:

$route['default_controller'] = "welcome";
$route['404_override'] = '';
$route['test_welcome'] = 'welcome';
$route['api/v1/(:any)'] = "api_v1/$1";
$route['item/(:any)'] = "item/show_item/$1";

$route['scaffolding_trigger'] = "";

The important line is $route['api/v1/(:any)'] = "api_v1/$1"; I want this to work.

This is my CodeIgniter config.php:

$config['base_url']     = '';
$config['index_page']   = '';
$config['uri_protocol'] = 'REQUEST_URI';
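For what it is worth, a hedged sketch of a PHP location block that tends to work with CodeIgniter's index.php/<controller>/<method> URLs; "No input file specified" usually means PHP-FPM received a SCRIPT_FILENAME it could not resolve, so the script name and the path info are split explicitly here:

location ~ ^(.+\.php)(/.*)?$ {
    fastcgi_split_path_info ^(.+\.php)(/.*)$;
    fastcgi_pass unix:/var/run/php5-fpm.sock;
    fastcgi_index index.php;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_param PATH_INFO $fastcgi_path_info;
}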

Please help me somehow. Thank you very much

How to use hdparm to fix a pending sector?

Posted: 26 Nov 2021 05:05 AM PST

SMART is reporting one pending sector on one of my server's HDDs. I've read lots of articles recommending hdparm to "easily" force the disk to reallocate the bad sector, but I can't find the correct way to use it.

Some info from my "smartctl":

Error 95 occurred at disk power-on lifetime: 20184 hours (841 days + 0 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  40 51 00 d7 55 dd 02  Error: UNC at LBA = 0x02dd55d7 = 48059863

  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --  ----------------  --------------------
  c8 00 08 d6 55 dd e2 00  18d+05:13:42.421  READ DMA
  27 00 00 00 00 00 e0 00  18d+05:13:42.392  READ NATIVE MAX ADDRESS EXT
  ec 00 00 00 00 00 a0 02  18d+05:13:42.378  IDENTIFY DEVICE
  ef 03 46 00 00 00 a0 02  18d+05:13:42.355  SET FEATURES [Set transfer mode]
  27 00 00 00 00 00 e0 00  18d+05:13:42.327  READ NATIVE MAX ADDRESS EXT

SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Extended offline    Completed: read failure       90%     20194         48059863
# 2  Short offline       Completed without error       00%     15161         -

With that "bad LBA" (48059863) in hand, how do I use hdparm? What type of address the parameters "--read-sector" and "--write-sector" should have?

If I issue the command hdparm --read-sector 48095863 /dev/sda, it reads and dumps data. If this command were right, I should expect an I/O error, right?

Instead, it dumps data:

$ ./hdparm --read-sector 48059863 /dev/sda

/dev/sda:
reading sector 48059863: succeeded
4b50 5d1b 7563 a932 618d 1f81 4514 2343
8a16 3342 5e36 2591 3b4e 762a 4dd7 037f
6a32 6996 816f 573f eee1 bc24 eed4 206e
(...)
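For reference, a hedged sketch of the usual procedure: the LBA reported by smartctl is used directly as the sector number, and it is an explicit write (which destroys whatever was stored in that sector) that makes the drive remap a genuinely unreadable sector; hdparm demands an acknowledgement flag for that. The LBA from the self-test log above is 48059863:

hdparm --read-sector 48059863 /dev/sda
# expected to fail with an I/O error if the sector is truly pending
hdparm --yes-i-know-what-i-am-doing --write-sector 48059863 /dev/sda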

Remove apache's DirectoryIndex

Posted: 26 Nov 2021 07:15 AM PST

Is it possible to turn Apache's DirectoryIndex off for a specific directory using a .htaccess file? I've tried DirectoryIndex Off, but if a file named Off exists it is used as the index file. I want to make sure that the directory will always be listed, no matter what files are inside.
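For what it is worth, since Apache 2.4.4 the special value "disabled" switches DirectoryIndex off entirely (unlike an arbitrary word such as Off, which is treated as a filename), and mod_autoindex then provides the listing. A sketch for the .htaccess, assuming AllowOverride permits Indexes and Options:

DirectoryIndex disabled
Options +Indexes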

Connect to a Fortinet VPN with Ubuntu

Posted: 26 Nov 2021 06:15 AM PST

I don't know a lot about VPNs but I'd like to connect to a Fortinet VPN with Ubuntu.

On Windows I can connect using FortiClient just by entering the policy server (vpn.theserver.com); it then asks for a username/password. I use IPsec.
