Recent Questions - Server Fault |
- Cannot Remote-SSH on VS-Code from a WSL2 Remote Folder
- Best practices to troubleshoot stuck php application due to internal curl calls to unresponsive endpoints
- How do you profile DNS response times on CentOS8?
- aws - ECS capacity provider permission
- Connect to an /ext host of Fortinet VPN with Ubuntu
- Task Scheduler GPO for purging files does not apply due to OneDrive
- windows remote shutdown & "The RPC server is unavailable"
- Linux equivalent to "query user /server:my-server-address"
- How to update Windows server 2019 desktop 1809 to 2004 on Google cloud compute engine?
- Can NGINX call another service before dispatching
- Org Policies error when creating a Cloud Function
- how to mark connections to route multiple gateways?
- How to fix "InvalidInput 400: Record name {} is not valid for hosted zone" on AWS?
- Nginx to point to different subdomains
- How to set maximum queue connection for nginx port in Windows?
- Connect MongoDB Compass to GCP Ubuntu VM
- Systemd: how to start a service after another, started by timer, finishes?
- mail issues "535 Authentication credentials invalid"
- Is the attack surface of Docker vs. LXC/systemd-nspawn significantly different?
- network drops, dhcp renewal fails
- Apache Reverse Proxy in front of RD Web Access IIS
- Can I use hosts.allow hosts.deny files to restrict access to my websites (Nginx)?
- Disabling cloud-init if metadata server cannot be reached
- Redis (error) NOAUTH Authentication required
- All External Mail to Office 365 Fails SPF, Marked as Junk by EOP in a Hybrid Deployment
- SFTP logging doesn't work
- NGINX + Codeigniter doesn't work: can't access controllers and send parameters to controllers
- How to use hdparm to fix a pending sector?
- Remove apache's DirectoryIndex
- Connect to a Fortinet VPN with Ubuntu
Cannot Remote-SSH on VS-Code from a WSL2 Remote Folder Posted: 26 Nov 2021 08:17 AM PST I have installed VSCode (v1.62.3) on Windows (v10.0.19043) and have installed the WSL2 (Ubuntu 20.04) distribution. I reopen a folder in WSL2, and in the WSL2 terminal I have set up my SSH keys (/home/user/.ssh/ssh-privatefile), so from the WSL2 terminal within VSCode I can SSH to the server. What I am trying to do from this WSL2 folder is open a Remote-SSH connection to the very same Linux server, so I can see its filesystem too. I have set up a config file (/home/user/.ssh/configfile) and I am referencing this file in the Remote-SSH settings. When I try to connect to this remote SSH host in a new window I get the error shown below, which tells me that VSCode is trying to reference a Windows path. Any ideas how to resolve this? Thanks |
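A hedged sketch of the workaround that often applies here, assuming the Remote-SSH extension is resolving its config and key paths on the Windows side rather than inside WSL: keep (or copy) the SSH config and key in the Windows user profile and point the extension at it explicitly. Every path, host and key name below is a placeholder:

    # C:\Users\<you>\.ssh\config  (Windows-side copy of the WSL config)
    Host myserver
        HostName server.example.com
        User user
        IdentityFile C:\Users\<you>\.ssh\ssh-privatefile

    // settings.json of the Windows VS Code (remote.SSH.configFile accepts an absolute path)
    "remote.SSH.configFile": "C:\\Users\\<you>\\.ssh\\config"

The WSL-side ~/.ssh files keep working for the integrated terminal; the copy is only so the extension has a path it can resolve from Windows.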
Best practices to troubleshoot stuck php application due to internal curl calls to unresponsive endpoints Posted: 26 Nov 2021 08:07 AM PST Recently I found myself with a website (a Prestashop e-commerce shop on a CentOS PHP-FPM / Apache / MySQL machine) that was down and not responding to web requests. After investigation, the issue turned out to be an API call made with php-curl towards an endpoint that was temporarily offline, inside an application PHP file that is included in every page of the website. The cURL call had wrongly been made without a CURLOPT_TIMEOUT_MS setting, so visiting users rapidly filled the maximum number of PHP connections, blocking the php-fpm processes and preventing my server from receiving other incoming connections. I wonder whether one can quickly and effectively prevent or identify such a problem "in production" from the terminal if it happens again (especially to quickly understand which endpoint is blocked, or to identify the file from which the blocking script is generated), since in my case I had to investigate at the application level rather than from the server, since:
Thanks for your help. |
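A minimal sketch of the prevention side: guarding the php-curl call with both a connect timeout and a total timeout, so a dead endpoint cannot pin a php-fpm worker. The endpoint URL and limits are made up for illustration:

    <?php
    // Hypothetical endpoint; the point is the timeouts, not the URL.
    $ch = curl_init('https://api.example.com/v1/status');
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_CONNECTTIMEOUT_MS, 1000); // give up on connecting after 1 s
    curl_setopt($ch, CURLOPT_TIMEOUT_MS, 2000);        // hard cap on the whole transfer
    $body = curl_exec($ch);
    if ($body === false) {
        // Log and degrade gracefully instead of blocking the worker.
        error_log('API call failed: ' . curl_error($ch));
        $body = null;
    }
    curl_close($ch);

On the identification side, php-fpm's pool options request_slowlog_timeout and slowlog make it dump a backtrace of any worker stuck past a threshold, which usually points straight at the blocked curl call, and ss -tnp | grep php-fpm from the terminal shows which remote endpoints the stuck workers are holding connections to.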
How do you profile DNS response times on CentOS 8? Posted: 26 Nov 2021 07:21 AM PST I would like to know exactly how long a DNS response takes to resolve an address, so I can compare different servers (my machine (I use The problem is that my app needs to resolve a URL whose IP has just changed inside a CDN (e.g. Cloudflare), and I need this resolution to be as fast as possible. So I would like to collect statistics on how fast different DNS servers can resolve the URL to the new IP. |
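A simple way to collect those numbers on CentOS 8 is dig's own query-time report (dig is in the bind-utils package). A rough sketch, with the resolver list and hostname as placeholders; +tries=1 and +time=2 keep retries from skewing the figures:

    # Compare how fast several resolvers answer for the same name.
    for ns in 8.8.8.8 1.1.1.1 192.168.1.1; do
        t=$(dig @"$ns" www.example.com A +noall +stats +tries=1 +time=2 | awk '/Query time:/ {print $4}')
        echo "$ns: ${t} ms"
    done

Running that in a loop (or from cron into a file) gives a crude but serviceable latency profile per resolver.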
aws - ECS capacity provider permission Posted: 26 Nov 2021 07:15 AM PST I'm trying out Terraform for managing my infrastructure and ran into a bit of an issue I'm not sure what to look for. I'm attempting to create a capacity provider for my ECS cluster, however I'm getting the following error. Below are my files: launch config and autoscale group creation, ECS cluster and capacity provider creation. I was able to create this from the console's GUI; only Terraform returns this error. Help would be greatly appreciated. Thanks in advance. |
Connect to an /ext host of Fortinet VPN with Ubuntu Posted: 26 Nov 2021 07:11 AM PST I have used openfortivpn many times, but I have never been able to properly configure such a VPN in the network manager. Unfortunately, now I have to connect to a host of the form [some ip]/ext (e.g. 192.168.10.10/ext) and I can't do that even via the CLI; it is only possible using the 'original' FortiClient. How should I configure my network manager in this case? I want to be able to connect to this VPN quickly, just like selecting a Wi-Fi network. When I try to connect with the CLI, openfortivpn throws an error. Additionally, in the FortiClient GUI I have selected
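A hedged guess at the missing piece: on FortiGate SSL VPNs a trailing path like /ext in the portal address is usually a realm, and openfortivpn can be told about it explicitly with its realm option. The IP, port, realm and username below are taken from, or assumed from, the question:

    # One-off attempt from the CLI (sudo is needed to create the tunnel device):
    sudo openfortivpn 192.168.10.10:443 --realm=ext --username=myuser

    # Or persistently, e.g. in /etc/openfortivpn/config:
    host = 192.168.10.10
    port = 443
    realm = ext
    username = myuser
    # trusted-cert = <sha256 printed by openfortivpn on the first attempt>

For the "pick it like a Wi-Fi network" part, Ubuntu packages a Fortinet SSL VPN plugin for NetworkManager (network-manager-fortisslvpn-gnome); whether its dialog exposes the realm field may depend on the version, so the CLI is the safer first test.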
Task Scheduler GPO for purging files does not apply due to OneDrive Posted: 26 Nov 2021 06:02 AM PST I am trying to create a task with the following PS script: $locations="$env:userprofile\Desktop\New folder (2)","$env:userprofile\Desktop\New folder (3)" $Daysback = "-30" $CurrentDate = Get-Date $DatetoDelete = $CurrentDate.AddDays($Daysback) foreach ($location in $locations) {Get-ChildItem $location -Recurse | Where-Object { $_.LastWriteTime -lt $DatetoDelete } | Remove-Item} The new folders were created for testing; eventually it should be the Downloads folder. When running the script locally, using the path C:\Users\name.lastname\Desktop..., the script works fine and deletes the files in the correct directory. However, I have to check manually by going to C:\Users\name.lastname\Desktop... to find that out. The "New folder (2)" and "New folder (3)" on my desktop still contain the files (which are older than 30 days, as the script requires) after applying the GPO to my machine. When I checked the folder path (C:\Users\name.lastname\OneDrive - tekexperts.onmicrosoft.com\Desktop) I started to suspect that the variable $env:userprofile is finding the OneDrive-synced folders on my Desktop and consequently not deleting anything. I would really appreciate it if somebody could advise whether it is possible for my script to target the exact proper system folders instead of the synced ones. Thank you in advance. |
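A hedged variation of the script, assuming the mismatch really is OneDrive Known Folder Move redirecting Desktop: resolving the known folder instead of composing the path from $env:userprofile makes the script target whatever Windows currently treats as the Desktop (the redirected OneDrive copy, if KFM is on). Folder names are the test folders from the question:

    # Resolve the Desktop known folder (follows OneDrive KFM redirection, if any).
    $desktop      = [Environment]::GetFolderPath('Desktop')
    $locations    = @("$desktop\New folder (2)", "$desktop\New folder (3)")
    $dateToDelete = (Get-Date).AddDays(-30)

    foreach ($location in $locations) {
        Get-ChildItem $location -Recurse -File |
            Where-Object { $_.LastWriteTime -lt $dateToDelete } |
            Remove-Item
    }

If the goal is the opposite, i.e. the plain local profile folders, hard-coding "$env:SystemDrive\Users\$env:USERNAME\Desktop" sidesteps the redirect, but note that with KFM enabled that path is not the Desktop users actually see.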
windows remote shutdown & "The RPC server is unavailable" Posted: 26 Nov 2021 05:42 AM PST I have a Windows computer in a remote network which I want to restart. I do not have physical access to that PC. The reason for wanting to restart it: Remote Desktop fails to connect, with an endless "configuring remote session". When I try to invoke
It means some connectivity is available, but something is preventing the "remote shutdown". Question: any ideas how this problem can be fixed remotely? Or is there some other way to perform a remote restart? |
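Two hedged suggestions rather than a diagnosis, since the exact error path isn't visible here: the classic shutdown.exe route depends on RPC/SMB reachability to the target, while PowerShell's Restart-Computer can go over WinRM instead (TCP 5985/5986), which is sometimes the only thing a firewall lets through. The hostname is a placeholder:

    :: Classic RPC-based restart (needs admin rights on the target):
    shutdown /r /t 0 /m \\remote-pc /c "planned restart"

    # WinRM-based alternative from PowerShell 3.0 or later:
    Restart-Computer -ComputerName remote-pc -Protocol WSMan -Credential (Get-Credential) -Force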
Linux equivalent to "query user /server:my-server-address" Posted: 26 Nov 2021 05:12 AM PST The command in the title displays all connected users on my terminal server. Is there an equivalent Linux command to display the same information? |
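For reference, the closest Linux equivalents, run over SSH against the server (my-server-address is the placeholder from the title):

    # Logged-in users, their terminals and login times:
    ssh my-server-address who

    # Same, plus idle time and what each session is currently running:
    ssh my-server-address w

    # On systemd-based distributions, all login sessions (including non-tty ones):
    ssh my-server-address loginctl list-sessions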
How to update Windows server 2019 desktop 1809 to 2004 on Google cloud compute engine? Posted: 26 Nov 2021 04:57 AM PST I have created a virtual machine using Google Cloud Compute Engine. I have researched how to install Windows 10 and learned that it is much more difficult to install. Google Cloud only offers Windows Server 2019 Datacenter with version 1809 (the Desktop Experience build). I'd like to update it to version 2004, if that is possible. Can anyone help me? |
Can NGINX call another service before dispatching Posted: 26 Nov 2021 06:11 AM PST We are using NGINX as a reverse proxy; it dispatches the calls from outside to our internal Java microservices. We would like to add a special service which would serve as a "man-in-the-middle", but only for the request part. Its purpose is to decorate the original request (authentication, add/modify HTTP headers, verify access rights). The "decorative tasks" involve complicated business logic which cannot be configured in NGINX itself. We want this service to be called first, and then its response (especially the HTTP headers!) forwarded as a request to one of the microservices, optionally also calling the dispatched service with the original body but with the HTTP headers returned from the decorator service. When the service returns an HTTP error, it should be returned directly to the caller without dispatching. The service is implemented as a Java Spring Boot application; it is a regular web service. Is it possible to configure this in NGINX, and how? To be clear: I am not asking how to implement this specific service. I only need to know whether (and how) NGINX can be configured to call another service before dispatching the call, passing the headers (and maybe also the body, but not necessarily) returned from this service on to the dispatched call. |
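This looks like a textbook case for nginx's auth_request module (ngx_http_auth_request_module): nginx sends a body-less subrequest to an internal location first, a 2xx lets the original request continue, and a 401/403 from the decorator is returned straight to the caller. It can copy response headers from the decorator onto the upstream request via auth_request_set, but it cannot replace the body, so that part would need a different approach. A rough sketch; the upstream name, decorator address and header names are assumptions:

    # "java_microservices" stands in for the existing upstream/proxy target.
    location /api/ {
        auth_request      /_decorate;                 # call the Spring Boot decorator first
        auth_request_set  $decorated_auth $upstream_http_x_decorated_auth;

        proxy_set_header  X-Decorated-Auth $decorated_auth;   # pass its response header on
        proxy_pass        http://java_microservices;
    }

    location = /_decorate {
        internal;
        proxy_pass              http://127.0.0.1:8081/decorate;   # hypothetical decorator endpoint
        proxy_pass_request_body off;                 # auth subrequests carry no body
        proxy_set_header        Content-Length "";
        proxy_set_header        X-Original-URI $request_uri;
    }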
Org Policies error when creating a Cloud Function Posted: 26 Nov 2021 04:08 AM PST When trying to create a "hello world" Cloud Function, I get the error message: "The request has violated one or more Org Policies. Please refer to the respective violations for more information." Now, which org policies have been violated? In the Log Explorer I find the error message like this: |
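One way to find out which constraint tripped, assuming the Cloud SDK is available (PROJECT_ID is a placeholder); the violated constraint name is usually also spelled out in the details of that Logs Explorer entry:

    # List the org-policy constraints that have a policy set on this project:
    gcloud resource-manager org-policies list --project=PROJECT_ID

    # Then inspect a specific one (the constraint name here is just an example):
    gcloud resource-manager org-policies describe cloudfunctions.allowedIngressSettings --project=PROJECT_ID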
how to mark connections to route multiple gateways? Posted: 26 Nov 2021 08:35 AM PST Hi, I am having trouble setting up permanent routes for my network interfaces. I have: OS: Linux (CentOS 7); eth0: IP 172.16.3.6, gateway 172.16.0.1; eth0:1: IP 10.1.5.102, gateway 10.1.5.101; eth0:2: IP 10.1.5.106, gateway 10.1.5.105. And I want to connect to: 10.10.10.1:5160 via 10.1.5.102 (SIP trunk connection, UDP) and 10.10.10.1:5161 via 10.1.5.106 (SIP trunk connection, UDP). There is a single destination IP but different ports. So how can I mark and route the connections? (By default connections go out with the 172.16.3.6 IP address.) |
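A hedged sketch of the usual fwmark plus policy-routing recipe for exactly this one-destination/two-ports case. The table numbers 101/102 are arbitrary, and the commands need to go into a script or ifup hook to survive a reboot:

    # Mark outgoing SIP traffic by destination port.
    iptables -t mangle -A OUTPUT -p udp -d 10.10.10.1 --dport 5160 -j MARK --set-mark 1
    iptables -t mangle -A OUTPUT -p udp -d 10.10.10.1 --dport 5161 -j MARK --set-mark 2

    # One routing table per uplink, each with its own gateway and source address.
    ip route add default via 10.1.5.101 dev eth0 src 10.1.5.102 table 101
    ip route add default via 10.1.5.105 dev eth0 src 10.1.5.106 table 102

    # Send marked packets to the matching table.
    ip rule add fwmark 1 table 101
    ip rule add fwmark 2 table 102

Note that for locally generated traffic most SIP stacks also need to be told which local address to bind to; the src hint above only helps packets that have not already picked a source address.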
How to fix "InvalidInput 400: Record name {} is not valid for hosted zone" on AWS? Posted: 26 Nov 2021 07:02 AM PST I'm migrating a site currently hosted on github pages to AWS. The site in question is on an aws instance, with an elastic ip of http://13.40.0.39/. The elastic IP does correctly show the site. I want the domain www.whitewaterwriters.com to show the site. I created a hosted zone in root 53. I changed the nameservers at my domain provider (qiq) and confirmed it later with whois: they have correctly changed. As per the screenshot I created a record to go to the correct IP address. However, when I access the site I get "Hmm, We're having trouble finding that site' and using the Dig command I get server fail. I tried 'test record' in AWS and got:
My question is: what causes this particular error in AWS (which currently seems not to return any search-engine results)? And, more generally, what would be the next step for debugging DNS issues in AWS? |
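For the general "next step", a couple of read-only checks with dig help separate delegation problems from record problems; the Route 53 name server below is a placeholder to be replaced with one listed in the hosted zone's NS record set. (As for the error text itself, the empty {} in "Record name {} is not valid for hosted zone" suggests the record name field reached the API empty or malformed, but that is an inference, not something visible in the question.)

    # What does the world currently see as the delegation for the domain?
    dig NS whitewaterwriters.com +short

    # Ask one of the zone's own Route 53 name servers directly (placeholder name server):
    dig @ns-123.awsdns-45.org www.whitewaterwriters.com A +short

    # Trace the full resolution path to see where the SERVFAIL appears:
    dig www.whitewaterwriters.com +trace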
Nginx to point to different subdomains Posted: 26 Nov 2021 07:01 AM PST I have a domain (example.com) and I am trying to use subdomains to show different parts of the application. Right now it is all hosted in AWS. This is roughly the setup I am going for: sbx.example.com, trn.example.com, office.example.com. My nginx conf right now is as follows. I know that in AWS I would need to create a record for the domain; I believe that would be an NS record with the name of the subdomain (such as sbx.example.com). What I was thinking of doing is creating another repository (a clone) with the changes I need for SBX, creating another server block, and under server_name just changing the subdomain? Thoughts? |
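For what it's worth, a hedged sketch of how this is usually done with one nginx and one codebase rather than cloned repositories: one server block per subdomain, each with its own server_name (the roots below are placeholders and could just as well be proxy_pass targets). On the DNS side each subdomain normally gets an A/alias or CNAME record in Route 53 pointing at the same instance or load balancer; an NS record would delegate the subdomain away rather than point it at a host.

    server {
        listen 80;
        server_name sbx.example.com;
        root /var/www/sbx;       # placeholder docroot for the sandbox variant
    }

    server {
        listen 80;
        server_name trn.example.com;
        root /var/www/trn;
    }

    server {
        listen 80;
        server_name office.example.com;
        root /var/www/office;
    }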
How to set maximum queue connection for nginx port in Windows? Posted: 26 Nov 2021 04:01 AM PST I am learning to design scalable systems, for now using a Windows machine. I created two servers that listen on ports 27016 and 27015; all they do is return a "HelloWorld!" response. I set listen(ListenSocket, SOMAXCONN) for both servers when creating them in Visual Studio, following the Winsock tutorial. Using JMeter I performed a load test on each of them individually (1000 requests per second) and everything was OK. Now that I have introduced nginx, which listens on port 80 and load-balances the requests (1000 req per sec) between the two servers mentioned above, many requests are dropped during the JMeter load test. I am assuming that the queue size for port 80 is not configured for high traffic and want to tune it. How do I set the queue size to the maximum possible value, either from the nginx config or with a cmd command? |
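If the pending-connection queue really is the bottleneck, nginx exposes it as the backlog parameter of the listen directive; the value below is only an illustration, and on Windows the effective queue is still capped by whatever the OS accepts for listen(). The upstream name and addresses are placeholders for the two Winsock servers:

    events {
        worker_connections 4096;    # per-worker connection budget, another usual suspect
    }

    http {
        upstream backend_pool {
            server 127.0.0.1:27015;
            server 127.0.0.1:27016;
        }

        server {
            # backlog sets the pending-connection queue for this listening socket.
            listen 80 backlog=65535;
            location / {
                proxy_pass http://backend_pool;
            }
        }
    }

Whether the drops actually come from the listen queue, from worker_connections, or from the JMeter ramp-up can't be told from the question alone, so this is one knob among several.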
Connect MongoDB Compass to GCP Ubuntu VM Posted: 26 Nov 2021 07:03 AM PST I am a little lost on how I should host MongoDB on an Ubuntu VM. I have barely worked with Ubuntu before, so I struggle to understand a lot of aspects. I followed this guide: https://docs.mongodb.com/tutorials/install-mongodb-on-ubuntu/ and everything seemed to work in the GCP VM SSH console. I was able to log in to the DB with the admin login and password. But I don't understand why I cannot connect to it from external resources, or how to debug the issue. I am trying to access the DB with the VM's external IP provided by GCP Compute Engine, using the existing authentication information, but it doesn't work; all I get is this error after around a minute of waiting:
Is there any guide or advice to help me understand what exactly is wrong? I am lost and don't know what to check to even find the issue. |
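The two usual gates for this symptom, sketched with placeholders (whether they apply here is a guess): mongod only listens on localhost by default, and GCP blocks the port unless a firewall rule allows it.

    # /etc/mongod.conf (YAML): let mongod listen beyond localhost.
    net:
      port: 27017
      bindIp: 0.0.0.0    # or a comma-separated list of specific addresses

    # Restart mongod and open the port in the GCP firewall (placeholders throughout):
    sudo systemctl restart mongod
    gcloud compute firewall-rules create allow-mongodb \
        --allow=tcp:27017 \
        --source-ranges=YOUR.CLIENT.IP.ADDR/32 \
        --target-tags=mongodb
    # ...then attach the "mongodb" network tag to the VM.

Opening 27017 to 0.0.0.0/0 is an invitation to be scanned, so keep --source-ranges as tight as possible, or tunnel Compass over SSH instead.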
Systemd: how to start a service after another, started by timer, finishes? Posted: 26 Nov 2021 08:02 AM PST
The goal: On Debian Stretch, we want to download any updates and install them immediately, because we monitor the number of pending upgrades and currently get notifications for routine conditions.
Standard stretch: Standard stretch downloads updates and installs them using:
The timers run independently at random daily times, so there can be almost 24 hours between download and installation.
Research and attempts:
apt-daily-upgrade.service modification: Based on https://superuser.com/questions/1250874/trigger-another-systemd-unit-to-start-before-timer-unit-starts, I disabled apt-daily-upgrade.timer and modified apt-daily-upgrade.service, adding "Wants=apt-daily.service". I did run systemctl daemon-reload after the change. 24+ hours later, /var/log/unattended-upgrades/unattended-upgrades.log did not show that unattended-upgrades had been run.
apt-daily.service modification: Based on "Chaining custom systemd services", I understood that apt-daily.service needs to be configured to start apt-daily-upgrade.service, rather than apt-daily-upgrade.service being configured to start after apt-daily.service. I backed out the apt-daily-upgrade.service modification and created /etc/systemd/system/apt-daily.service with the addition "Before=apt-daily-upgrade.service". I did run systemctl daemon-reload after the change. 24+ hours later, /var/log/unattended-upgrades/unattended-upgrades.log did not show that unattended-upgrades had been run. The systemd journal showed "Daily apt download activities" messages, believed to show that apt-daily.service had been run:
Conclusion: How can we effect our goal, ideally within the systemd framework? |
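For what it's worth, the direction that usually works for this kind of chaining, offered as a hedged sketch rather than something verified on Stretch here: keep apt-daily-upgrade.timer disabled, leave the shipped units alone, and add a drop-in to apt-daily.service that both pulls in and orders itself before apt-daily-upgrade.service. Wants= on the upgrade unit does nothing once its own timer is disabled (nothing ever activates it), and Before= without Wants= does not pull the upgrade service into the transaction either, which would match the observed behaviour; a full replacement file in /etc/systemd/system also shadows the shipped unit entirely, so a drop-in is safer.

    # systemctl edit apt-daily.service
    # (creates /etc/systemd/system/apt-daily.service.d/override.conf)
    [Unit]
    Wants=apt-daily-upgrade.service
    Before=apt-daily-upgrade.service

    # afterwards:
    systemctl daemon-reload

Because apt-daily.service is Type=oneshot, the Before= ordering means apt-daily-upgrade.service only starts once the download step has finished, which is the sequence the goal asks for.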
mail issues "535 Authentication credentials invalid" Posted: 26 Nov 2021 06:01 AM PST I'm trying to send mails from command line with my 1and1 credentials. But if the following configuration worked to send mails from my nexcloud server :
The following configuration and command still fail: /etc/ssmtp/ssmtp.conf: /etc/ssmtp/revaliases: command line: Is there something I forgot? EDIT: I also tried with port 25 and without TLS |
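For comparison, a hedged skeleton of an authenticated ssmtp setup for port 587 with STARTTLS; the mail host and addresses are placeholders rather than 1and1's actual values, and a frequent cause of 535 is AuthUser being only the local part instead of the full mailbox address (a guess here, not a diagnosis):

    # /etc/ssmtp/ssmtp.conf (all values are placeholders)
    root=me@mydomain.example
    mailhub=smtp.provider.example:587
    # Usually the full mailbox address, not just the local part:
    AuthUser=me@mydomain.example
    AuthPass=secret
    UseSTARTTLS=YES
    hostname=myhost.mydomain.example
    FromLineOverride=YES

    # /etc/ssmtp/revaliases
    root:me@mydomain.example:smtp.provider.example:587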
Is the attack surface of Docker vs. LXC/systemd-nspawn significantly different? Posted: 26 Nov 2021 08:02 AM PST One decision point in choosing between an application container solution and an OS container solution is security. I'm not knowledgeable enough to compare and contrast the two. I assume they're different, but is one significantly more open than the other? |
network drops, dhcp renewal fails Posted: 26 Nov 2021 07:03 AM PST I'm struggling with a DHCP problem at work and can't seem to find a solution. I'm not sure if you can help me, but I thought I would give it a try, as I have the feeling I have tried almost everything in my power! Here's the thing: there are four computers which are basically exactly the same as any other computer in our place. Every 4 hours, the connection drops. It happens every day and started a few weeks ago. There wasn't any change on our side: still the same server, still the same switch. We even replaced one computer with a new one and the problem still occurred. What I understand here is that the DHCP client tries to renew the lease after, say, 2 hours and 3 hours; the lease isn't renewed, and in the end the connection drops. A few seconds later, the computer broadcasts a brand-new request and gets an IP address. We know that:
At this point we tried different things, including setting a reservation in the DHCP server to see what happens. I don't know yet whether that solved our problem. Still, I would like to know what caused it. Any thoughts on my issue? Thanks! Update: Sorry I didn't answer sooner. The IP reservation didn't work, as I expected; something during the renewal process fails at some point. I'm going to dig into those leads you gave me. I can look at the switches and see what is going on there, but my rights on the DHCP server are much more limited. I will try to give you an update as soon as I can. Wireshark may help: I'll launch it this afternoon on a client, at about the time of the expected disconnection, and see what happens. Update 2: Hi guys. Sad to say, we didn't find anything relevant. Users have had this problem for a few months and they have already been very (...) very patient. I really can't ask them to wait any longer. So, static IP addresses it is. That's the only workaround I have. I keep my fingers crossed hoping that this issue won't spread to other workstations. Thank you guys for your help! |
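In case anyone wants to reproduce the capture step without a full Wireshark session, the renewal conversation can be watched from an affected client (or a mirror port) with tcpdump around the expected renewal time; the interface name is a placeholder:

    # Watch DHCP traffic (ports 67/68) with timestamps and without name resolution:
    tcpdump -i eth0 -n -vv 'udp and (port 67 or port 68)'

    # Or save it for later analysis in Wireshark:
    tcpdump -i eth0 -n 'udp and (port 67 or port 68)' -w dhcp-renew.pcap

Seeing whether the DHCPREQUEST at renewal time gets a DHCPACK, a DHCPNAK, or no answer at all usually narrows the fault down to the server, a relay, or the switch path.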
Apache Reverse Proxy in front of RD Web Access IIS Posted: 26 Nov 2021 06:01 AM PST I have just configured a Microsoft Remote Desktop Services service on an internal Windows Server 2012 R2 server. I have access to RDP from outside the network with port forwarding. However, because I have additional web servers running on port 80/443, I can't expose RD Web Access, running on IIS, directly to the internet. I have a reverse proxy configuration with Apache for all my internal sites, so I'm trying to use the same for RD Web Access. My configuration (for both HTTP & HTTPS) is as follows. This configuration seems to work but has an issue. When connecting directly to I have tried adding Is there something I'm missing in my Apache Reverse Proxy configuration? |
Can I use hosts.allow hosts.deny files to restrict access to my websites (Nginx)? Posted: 26 Nov 2021 05:04 AM PST Is there any way to use the hosts.allow and hosts.deny files to restrict access to websites served by Nginx? This does not work: I am going to use Cloudflare's services and want to allow connections to Nginx only from Cloudflare IP addresses, but can't find a way to do it. Yes, I know I can allow/deny connections by IP address in the Nginx configuration file, but not in this case, because I use: which converts Cloudflare IP addresses into the users' IP addresses, and after that I cannot tell whether the request is coming from Cloudflare. I want to allow connections only from Cloudflare, but at the same time I want to know the real IP address of each HTTP request. Any ideas? |
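Two hedged notes: hosts.allow/hosts.deny are TCP wrappers and only affect daemons linked against libwrap (or run via tcpd), which nginx is not, so that route is a dead end as far as I know. The usual pattern is to do the allow-listing one layer down at the packet filter, so whatever real-IP conversion nginx does internally no longer matters; a sketch with ipset, using the ranges Cloudflare publishes:

    # Build an ipset with Cloudflare's published IPv4 ranges.
    ipset create cloudflare4 hash:net
    for net in $(curl -s https://www.cloudflare.com/ips-v4); do
        ipset add cloudflare4 "$net"
    done

    # Allow 80/443 only from those ranges, drop the rest.
    iptables -A INPUT -p tcp -m multiport --dports 80,443 -m set --match-set cloudflare4 src -j ACCEPT
    iptables -A INPUT -p tcp -m multiport --dports 80,443 -j DROP

The list would need to be refreshed periodically (e.g. from cron) and repeated for https://www.cloudflare.com/ips-v6 with an inet6 set, and the rules need to be persisted with whatever firewall-persistence mechanism the distribution uses.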
Disabling cloud-init if metadata server cannot be reached Posted: 26 Nov 2021 05:04 AM PST I'm trying to get cloud-init not to take any action if the metadata server cannot be reached. If cloud-init ignores the error and continues executing (which seems to be the default configuration), it resets the host SSH key, the administrative user's password, etc., which is a problem if the virtual machine was already being used beforehand (if password login was configured, users can no longer access the VM). I'm seeing this problem in two situations:
|
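As far as I know cloud-init has no built-in "do nothing if the datasource is unreachable" switch, so the blunt options below simply disable it outright once the VM has been handed over; both mechanisms are documented cloud-init behaviour, but whether fully disabling it fits this workflow is a judgment call:

    # Option 1: marker file; cloud-init exits immediately on every subsequent boot.
    sudo touch /etc/cloud/cloud-init.disabled

    # Option 2: add this to the kernel command line in the bootloader config:
    #   cloud-init=disabled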
Redis (error) NOAUTH Authentication required Posted: 26 Nov 2021 04:46 AM PST I get the error: When in |
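For anyone landing here: NOAUTH means the server has requirepass set and the client has not authenticated yet; the password below is a placeholder:

    # Inside an interactive redis-cli session:
    AUTH yourpassword

    # Or pass it up front:
    redis-cli -a yourpassword ping

    # The password itself is defined in redis.conf:
    #   requirepass yourpassword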
All External Mail to Office 365 Fails SPF, Marked as Junk by EOP in a Hybrid Deployment Posted: 26 Nov 2021 07:05 AM PST In short: legitimate emails are landing in Junk folders because EOP (Exchange Online Protection) stamps the messages as junk (SCL 5) and SPF-failed. This happens with mail from all external domains (e.g. gmail.com/hp.com/microsoft.com) to the client's domain (contoso.com). Background info: We are at the beginning of migrating mailboxes to Office 365 (Exchange Online). This is a Hybrid Deployment / rich-coexistence configuration, where:
The problem is that when external users send emails to an Office 365 mailbox in the organization (mail flow: External -> Mail Gateway -> on-premises mail servers -> EOP -> Office 365), EOP performs an SPF lookup and hard/soft-fails the messages against the external-facing IP address of the Mail Gateway from which it received the mail. (On-premises mailboxes do not show this problem; only mailboxes migrated to Office 365 do.) Example 1: from Microsoft to O365. Example 2: from HP to O365. Example 3: from Gmail to O365. For message headers with X-Forefront-Antispam-Report, refer to http://pastebin.com/sgjQETzM Note: 23.1.4.9 is the public IP address of the on-premises hybrid Exchange 2010 server connector to Exchange Online. How do we stop external emails from being marked as junk by EOP during the coexistence stage of a Hybrid Deployment? [2015-12-12 Update] This issue was fixed by Office 365 support (the escalated/backend team), as it had nothing to do with our settings. We were given the following suggestions:
The key part is the third point. "If the TLS is not enabled, incoming email from local Exchange will not be marked as internal/trust email, and EOP will check all records," said the support. The support determined a TLS issue from our mail headers by the below line:
This indicates TLS was not enabled when EOP received email. EOP did not treat the incoming email as trust email. The correct one should be like:
However, this was not caused by our settings; the support person helped us make sure our settings were correct by verifying the verbose SMTP logs from our Exchange 2010 Hybrid server. At around the same time, their backend team fixed the problem without letting us know what exactly caused it (unfortunately). After they fixed it, we found that the message headers had some significant changes as below. For internal-originated mail from Exchange 2003 to Office 365:
And for external mails (e.g. gmail.com) to Office 365:
Although SPF check still soft-fails for gmail.com (external) to Office 365, the support person said it was OK, and all mails would go to the Inbox instead of Junk folder. As a side note, during troubleshooting, the backend team found one seemingly minor configuration issue -- we had the IP from our Inbound Connector (i.e. public IP of Exchange 2010 Hybrid server) defined in our IP Allow List (suggested by another Office 365 support person as a troubleshooting step). They let us know we should not need to do this and in fact doing so can cause routing issues. They commented that on initial pass the email were not getting marked as spam so there was also a possible issue here. We then removed the IP from the IP Allow List. (However, the spam issue existed before the IP Allow List setting was made. We didn't think Allow List was the cause.) In conclusion, "it should be EOP mechanism," said the support person. Therefore, the whole thing should be caused by their mechanism. For anyone interested, the troubleshooting thread with one of their support persons can be viewed here: https://community.office365.com/en-us/f/156/t/403396 |
SFTP logging doesn't work Posted: 26 Nov 2021 04:01 AM PST I have some problems trying to log SFTP actions. I've configured some files, but nothing is written to the log file. I'm using Debian 7. User configuration:
/etc/ssh/sshd_config /etc/rsyslog.conf
ls -l /var/log/sftp.log I don't know what I'm doing wrong. I've restarted ssh and rsyslog services but it doesn't work. Any idea? Thanks in advance. |
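A combination that has worked for me on Debian with the internal SFTP server, offered as a sketch rather than a diagnosis; the facility, level and log path are arbitrary choices, and chrooted users additionally need a syslog socket inside the chroot (rsyslog's $AddUnixListenSocket) or nothing reaches the log:

    # /etc/ssh/sshd_config
    Subsystem sftp internal-sftp -f LOCAL5 -l INFO

    # For a chrooted group, the same options go on the ForceCommand line:
    # Match Group sftponly
    #     ChrootDirectory /home/%u
    #     ForceCommand internal-sftp -f LOCAL5 -l INFO

    # /etc/rsyslog.d/sftp.conf
    local5.*    /var/log/sftp.log
    # For chrooted users (legacy rsyslog syntax; path is a placeholder):
    # $AddUnixListenSocket /home/someuser/dev/log

    # Apply:
    service ssh restart && service rsyslog restart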
NGINX + Codeigniter doesn't work: can't access controllers and send parameters to controllers Posted: 26 Nov 2021 06:19 AM PST I want to run CodeIgniter under NGINX, but it just doesn't work. I want to be able to send parameters to controller methods, but so far I cannot even access a controller using the address example.com/index.php/welcome.php. It says "No input file specified." However, when I type example.com/index.php/ it redirects to the welcome controller. I would say I have tried all the nginx config files on the internet. This is my nginx config. This is my welcome controller. I want to be able to call function a from the welcome controller and get a response. Basically I want to build a REST server, and this is my routes.php. The important line is $route['api/v1/(:any)'] = "api_v1/$1"; I want this to work. This is my CodeIgniter config.php. Please help me somehow. Thank you very much |
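"No input file specified" usually means php-fpm received the whole URI, including the /welcome part, as SCRIPT_FILENAME. A hedged nginx sketch that splits PATH_INFO off before handing the request to php-fpm; the root, socket path and server name are placeholders (and, if memory serves, CodeIgniter controllers are addressed without the .php suffix, e.g. /index.php/welcome or /index.php/api/v1/foo):

    server {
        listen 80;
        server_name example.com;
        root /var/www/codeigniter;
        index index.php;

        location / {
            # Send anything that isn't a real file to CodeIgniter's front controller.
            try_files $uri $uri/ /index.php$is_args$args;
        }

        location ~ ^(.+\.php)(/.*)?$ {
            fastcgi_split_path_info  ^(.+\.php)(/.*)$;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_param PATH_INFO       $fastcgi_path_info;
            include       fastcgi_params;
            fastcgi_pass  unix:/run/php/php-fpm.sock;   # placeholder socket/address
        }
    }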
How to use hdparm to fix a pending sector? Posted: 26 Nov 2021 05:05 AM PST SMART is reporting one pending sector on one of my server's HDDs. I've read lots of articles recommending hdparm to "easily" force the disk to reallocate the bad sector, but I can't find the correct way to use it. Some info from my "smartctl": With that "bad LBA" (48059863) in hand, how do I use hdparm? What type of address should the parameters "--read-sector" and "--write-sector" take? If I issue the command hdparm --read-sector 48095863 /dev/sda it reads and dumps data. If this command were right, I should expect an I/O error, right? Instead, it dumps data: |
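For the record, the usual procedure with hdparm takes the LBA as a plain decimal sector number; note that the digits differ between the smartctl output (48059863) and the command in the question (48095863), which alone could explain the clean read. The write step is destructive for that sector's contents, so only after backups:

    # Re-read the exact LBA reported by smartctl:
    hdparm --read-sector 48059863 /dev/sda

    # If that read errors out, overwrite the sector with zeros so the drive can remap it.
    # DESTRUCTIVE for whatever data lives in that sector:
    hdparm --write-sector 48059863 --yes-i-know-what-i-am-doing /dev/sda

Afterwards, smartctl -A /dev/sda should show Current_Pending_Sector back at 0, possibly with Reallocated_Sector_Ct incremented.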
Remove apache's DirectoryIndex Posted: 26 Nov 2021 07:15 AM PST Is it possible to turn Apache's DirectoryIndex off? |
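If the intent is simply that no index file is ever served for directory requests, Apache 2.4.8+ accepts the special value "disabled"; pairing it with -Indexes also stops mod_autoindex from generating a listing instead. A sketch for a vhost or Directory block (the path is a placeholder):

    <Directory "/var/www/html">
        # Don't look for index.html / index.php at all (Apache 2.4.8 and later):
        DirectoryIndex disabled
        # And don't fall back to an auto-generated directory listing:
        Options -Indexes
    </Directory>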
Connect to a Fortinet VPN with Ubuntu Posted: 26 Nov 2021 06:15 AM PST I don't know a lot about VPNs, but I'd like to connect to a Fortinet VPN with Ubuntu. I can connect on Windows using FortiClient just by entering the policy server (vpn.theserver.com), and then it asks for a user/password. I use IPSec. |