Recent Questions - Server Fault |
- GCP internal http(s) load balancer timeout after 300s
- patroni.exceptions.PatroniFatalException: 'Failed to bootstrap cluster'
- Mounting an ocfs2 partition (with multipath removed)
- yaml.parser.ParserError: while parsing a block mapping
- Downgrade from 2019 standard to 2019 essentials
- What triggers a redeploy in a multi container azure app service?
- Apache log search keywords are not listed in referrer
- How to install PHP-intl on CentOS PHP 7.2
- Taking input on remote vm
- SSH Tunnel - Performant alternatives for exposing a local port to remote machines?
- Create load balancer on GCP (GAE). SSL cert domains stuck in FAILED state
- Why isn't ssh-agent a background process by default?
- Error in job agent when running a package in SSISDB catalog (SQL Server)
- Problems with DNS and IPv6 on Server 2012 and 2016
- NAT gateway set up doesn't work
- ldapsearch: How do I query pwdLastSet and have it make sense?
- nomad shows job as dead
- HP SmartArray Rebuild Logical Drive without data loss
- How can I move packages between Spacewalk channels?
- Adding multiple users from a group to an Active Directory group using Powershell
- BIRD iBGP - Route not reachable
- nginx proxied responses terminating/truncating
- Roundcube domain change for a single account
- Network Services Write/Modify Permissions on Inetpub Folder Potential security issues
- Preventing access to apache sites via server ip in browser
- truncated headers from varnish configuration file
- Assign multiple IP addresses in NAT mode to the same VM
- BAD stripes in controller and windows utility to remap bad blocks
GCP internal http(s) load balancer timeout after 300s Posted: 24 May 2021 09:57 PM PDT Recently I've had a problem with the timeout of a backend service (an instance group) behind an internal http(s) load balancer on GCP: after 300 seconds, API calls through the LB fail with an HTTP 408 response. First, I call a quick API through the LB; that works normally. Then I set the backend service timeout to 10 seconds and call a slow API (say, one that takes 500 seconds to complete). The error response arrives after 10 seconds, as expected. But when I set the timeout to 1000 seconds and call the slow API, I receive the timeout error after exactly 300 seconds. I also increased the connection draining timeout to 1000 seconds, but it still doesn't work. Is there any parameter I need to set to allow APIs with a reply time of more than 300 seconds? Thank you. |
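A hedged first check for the 300-second cutoff described above (service name and region are hypothetical): for internal HTTP(S) load balancers the relevant knob is the backend service's own timeout, which is separate from connection draining.

```bash
# Inspect the current backend service timeout (internal HTTP(S) LB backend
# services are regional, hence --region):
gcloud compute backend-services describe my-backend --region=us-central1 \
    --format="get(timeoutSec)"
# Raise it above the slowest expected request:
gcloud compute backend-services update my-backend --region=us-central1 \
    --timeout=1000
```

If timeoutSec already reads 1000 and the 300s cutoff persists, the limit likely sits elsewhere in the chain (client library, an intermediate proxy, or the LB flavor in use) and is worth ruling out one hop at a time.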
patroni.exceptions.PatroniFatalException: 'Failed to bootstrap cluster' Posted: 24 May 2021 09:47 PM PDT I have a problem with Patroni. Here is my patroni.yml. Can someone help? |
Mounting an ocfs2 partition (with multipath removed) Posted: 24 May 2021 08:37 PM PDT (I previously posted this on AskUbuntu, but I've since deleted that since no one replied; I now think it's server-related and probably more suitable here.) I look after a set of servers running Ubuntu 20.04 which are attached to a disk array that has been formatted as an ocfs2 filesystem. When the same programs with the same inputs write to an externally mounted ext4 filesystem, the problem does not occur. As far as I can tell, the main difference between the two setups is the use of multipath. (Indeed, if this doesn't solve the problem, then I will need to try something else.) I disabled multipath.
When I restarted, I saw messages I had never seen before. Does something have to be done on either the disk array or on the server? I was thinking of just trying to mount it directly. Does this mean I made a mistake in the very beginning and this disk array should have been made as a GPT partition? Is that why I don't see the partition? What are my options? The partition currently holds user data, so I'm hesitant to just try mounting it without asking for opinions. It's only partially used, so I can probably find space to move files around to make room, if necessary. |
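Before attempting any mount on the array above, a read-only inspection is cheap (commands from util-linux and ocfs2-tools; nothing here writes to the device):

```bash
# How does the kernel see the device now that multipath is gone?
lsblk -f                # block devices, partition tables, fs signatures
blkid | grep -i ocfs2   # is the ocfs2 superblock still detected?
# ocfs2-tools ships a detector for ocfs2 volumes:
mounted.ocfs2 -d
```

If blkid still reports an ocfs2 signature on the expected device, the data layout is probably intact and the remaining question is only which device node to mount.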
yaml.parser.ParserError: while parsing a block mapping Posted: 24 May 2021 08:40 PM PDT I followed this document while configuring Patroni: https://snapshooter.com/learn/postgresql/postgresql-cluster-patroni and I have a problem with the patroni.service file. Here is my patroni.yml, and this is the error I got: |
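This parser error is almost always inconsistent indentation (or a stray tab) in the YAML file. A quick hedged check, with the file path assumed from the tutorial above:

```bash
# PyYAML prints the offending line/column for block-mapping errors,
# which pinpoints the misindented key:
python3 -c 'import sys, yaml; yaml.safe_load(open(sys.argv[1]))' \
    /etc/patroni/patroni.yml
```

Every key at the same level must start in the same column, and tabs are not valid YAML indentation.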
Downgrade from 2019 standard to 2019 essentials Posted: 24 May 2021 06:05 PM PDT We have a physical server in a datacenter running Server 2016 Standard. We created a virtual server with Hyper-V as 2019 Essentials. The datacenter staff activated the VM using one of the 2 free virtual machines allowed with a Standard license. Somehow, during activation, the version was upgraded to 2019 Standard. Is there any way to downgrade other than setting up another VM as Essentials and having the datacenter transfer the license? We would rather use Essentials for the 25 CALs that come with it. I have found various conflicting articles and was hoping someone on this forum has encountered this or a similar situation. |
What triggers a redeploy in a multi container azure app service? Posted: 24 May 2021 05:43 PM PDT I have an azure app service configured to be multi-container like so: And I have continuous deployment enabled. The two containers are pushed to the ACR via GitHub Actions. It SEEMS like pushing to the ACR triggers the containers to get re-pulled and recreated; I just can't find where the documentation says that. So I need to understand what exactly triggers the redeploy (see the sketch below):
|
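One hedged way to see the moving parts behind the continuous-deployment question above (resource names hypothetical): when CD is enabled, App Service registers a webhook with the registry, and pushes to the watched tag fire it.

```bash
# The webhook URL App Service expects the registry to call:
az webapp deployment container show-cd-url -n myapp -g mygroup
# What webhooks the ACR actually has configured, and for which scope (tag):
az acr webhook list --registry myacr -o table
```

If the webhook's scope matches the pushed repository:tag, a push should trigger a re-pull; pushes to other tags should not.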
Apache log search keywords are not listed in referrer Posted: 24 May 2021 05:06 PM PDT Search keywords/phrases are not appearing in the referrer link in the Apache log. Here's the log format in apache2.conf (default): To test the referrer capture, I entered a keyword to search on bing.com and google.com, the web site was found and listed, and I clicked the links. I expected to see the search keyword in the referrer URL, but in access.log I only see the referrer; the search keywords and parameters are missing from the URL: |
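This is most likely not a logging problem: for years now Google and Bing have served results over HTTPS and send an origin-only referrer (e.g. just https://www.google.com/), so the query string carrying the keywords never reaches the destination site. The LogFormat's %{Referer}i can only record what the browser actually sends.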
How to install PHP-intl on CentOS PHP 7.2 Posted: 24 May 2021 04:26 PM PDT My PHP 7.2 application is giving an error about the missing intl extension. We are running:
I tried to install PHP-intl and it seemed to work; if I try again I now get this: I have reloaded Apache, but the extension is still not loaded:
How do I get from having php72-php-intl-7.2.34-4.el7.remi.x86_64 to having the extension actually installed and enabled in PHP? |
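A hedged sketch for CentOS 7 with the Remi repository: the php72-php-intl package reported above belongs to Remi's Software Collections stream (it installs under /opt/remi/php72), which a site served by the base php packages never loads. If Apache runs the non-SCL PHP 7.2 from the remi-php72 repo, the matching package is plain php-intl:

```bash
yum install -y yum-utils
yum-config-manager --enable remi-php72   # non-SCL PHP 7.2 stream
yum install -y php-intl                  # matches the base "php" package
systemctl restart httpd
php -m | grep intl                       # should now print "intl"
```

Comparing `php -v` on the CLI with the version phpinfo() reports through Apache quickly shows whether two PHP installations are in play.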
Taking input on remote vm Posted: 24 May 2021 03:30 PM PDT Below is my use case: I have to take input from a user for a specific command which needs to run on a remote machine. My problem is that I am not able to access that machine directly, so I SSH to another server and from there SSH to the target server. But on that server we are not able to take any input from the user. Below is a sample script: It does not ask for user input and jumps directly to the scp command. Also, is this a correct way to pass a variable in a curl command? Any help will be appreciated! |
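The usual cause of the behavior above: a `read` executed on the far end of a non-interactive ssh has no terminal to read from. A minimal sketch, with all host and URL names hypothetical: collect the input locally, then pass it through both hops as an argument.

```bash
#!/usr/bin/env bash
# Ask locally, where a terminal exists:
read -rp "Enter value: " USER_INPUT

# OpenSSH 7.3+ can chain through the jump host with -J,
# avoiding nested ssh-inside-ssh quoting:
ssh -J user@jumphost user@target "/usr/local/bin/some_command '$USER_INPUT'"

# Passing a variable to curl: keep the whole URL in double quotes so the
# shell expands the variable but nothing else gets word-split:
curl -s "https://api.example.com/items?name=${USER_INPUT}"
```

If the remote command itself must prompt the user, `ssh -t -J ...` forces a TTY through the chain instead.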
SSH Tunnel - Performant alternatives for exposing a local port to remote machines? Posted: 24 May 2021 03:50 PM PDT I have Server A running at home in a Carrier-Grade NAT environment. Due to this I cannot open a port directly in my router. I also have Server B at a cloud hosting provider, with a static public IP address on which ports can be opened. Both machines run Ubuntu linux. So far the best solution for me has been SSH tunneling, but it comes with a performance drawback known as TCP-over-TCP. I have tried several solutions and each one of them had some problems:
I also need the original IP address of the client accessing the port through Server B to be visible to Server A. What software should I try? |
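One commonly suggested direction, sketched under stated assumptions (addresses, ports, and interface names are hypothetical; untested here): a WireGuard tunnel from A to B avoids TCP-over-TCP entirely, and a DNAT rule on B forwards the public port to A while preserving the client's source address.

```bash
# On Server B (public side; WireGuard peer A = 10.66.0.2 on wg0):
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A PREROUTING -p tcp --dport 8080 \
    -j DNAT --to-destination 10.66.0.2:8080
iptables -A FORWARD -p tcp -d 10.66.0.2 --dport 8080 -j ACCEPT
# Deliberately NO masquerade: Server A then sees the real client IP.
# The catch: Server A must route reply traffic for those connections back
# out through wg0 (policy routing / ip rule), or connections will hang.
```

The alternative that keeps plain SSH (`ssh -R` with sshd's GatewayPorts enabled) is simpler but shows every client as coming from Server B, which fails the source-IP requirement.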
Create load balancer on GCP (GAE). SSL cert domains stuck in FAILED state Posted: 24 May 2021 09:58 PM PDT I'm new to this topic. We're using GCP (App Engine, standard) to host a nodejs application. For various reasons we created two services - stage and default (think of the same app running in parallel). The default one is connected to a custom domain (GAE-provided SSL cert) and works properly. The stage service can be accessed at a google-generated URL (stage-dot-example.appspot.com), obviously protected by SSL certificates. Then we had to go through a security review from our partners, and we used stage for this. The result: we have to disable TLSv1.0 and TLSv1.1. With GAE, that means creating a Load Balancer and switching its SSL policy to one with a minimum TLS version. The problem: to create an external HTTPS load balancer, you have to create an SSL Certificate resource (i.e. you have to own a domain). With a custom domain I guess it should not be hard, but how do I do this for stage? Do I use my stage domain (...appspot.com) in the SSL Certificate resource? If so, what do I do with DNS records and the external IP (you need to point the A and AAAA records at the external IP)? Or, if I'm doing something wrong, could you point me in the right direction? UPDATE + UPDATE 2 I decided to go down the path proposed by Wojtek_B. So I verified stage.example.com and it worked fine without a Load Balancer. At that point, my DNS records included 4 A and 4 AAAA records for @ with google-provided IPs, and 3 CNAME records (www, stage, www.stage) pointing to "ghs.googlehosted.com." Next, I created an SSL certificate resource with 4 domains: example.com, www.example.com, stage.example.com, www.stage.example.com. Then I added an external HTTPS Load Balancer (with an external IP, for example 1.2.3.4, and the SSL cert mentioned above). I added new A records for @, www, stage, and www.stage pointing to 1.2.3.4, and dropped the CNAME records as redundant. After waiting 2-3 hours (TTL is 1/2 hour), all subdomains were activated except example.com (stuck in FAILED_NOT_VISIBLE). ANSWER I had been fighting the managed SSL certificate getting stuck in the provisioning state for a while. I followed a tutorial where you're supposed to create an external IP (v4) only. But I also had 4 AAAA records (acquired during domain verification) with (obviously) IPv6. So I reserved an external IP (v6) as well, and it took less than a minute to push all 4 (sub)domains to the active state. In just a few minutes both services behind the LB were up and running with the required TLS configs. |
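For anyone debugging the same FAILED_NOT_VISIBLE state, a hedged pair of checks (certificate and domain names hypothetical): the managed cert only activates once every A and AAAA record for the domain points at the load balancer.

```bash
# Per-domain provisioning status of the managed certificate:
gcloud compute ssl-certificates describe my-cert --global \
    --format="get(managed.status, managed.domainStatus)"
# What the world actually resolves - AAAA records are the classic culprit:
dig +short A example.com
dig +short AAAA example.com
```

FAILED_NOT_VISIBLE specifically means the CA could not see the domain resolving to the LB's IP, which matches the leftover-AAAA diagnosis in the answer above.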
Why isn't ssh-agent a background process by default? Posted: 24 May 2021 04:07 PM PDT I have always wondered why such a basic feature (loading SSH keys for persistent use) requires a clunky command to be executed in the background of a command line. Why isn't ssh-agent a service (for example) by default? I assume there might be a security reason, but I'm curious to hear other thoughts. |
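For reference, here is what "running it as a service" typically looks like in practice - a sketch using a systemd user unit (most distros do not ship this unit; you create it yourself, and the socket path is a choice, not a default):

```bash
mkdir -p ~/.config/systemd/user
cat > ~/.config/systemd/user/ssh-agent.service <<'EOF'
[Unit]
Description=SSH key agent

[Service]
Type=simple
Environment=SSH_AUTH_SOCK=%t/ssh-agent.socket
# -D: do not fork; -a: bind to an explicit socket path
ExecStart=/usr/bin/ssh-agent -D -a $SSH_AUTH_SOCK

[Install]
WantedBy=default.target
EOF
systemctl --user enable --now ssh-agent
# and in your shell rc:
#   export SSH_AUTH_SOCK="$XDG_RUNTIME_DIR/ssh-agent.socket"
```

Part of the answer to "why not by default" is visible here: the agent holds decrypted keys in memory, so distros historically left the decision (and lifetime) of that process to the user or the desktop session.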
Error in job agent when running a package in SSISDB catalog (SQL Server) Posted: 24 May 2021 10:05 PM PDT I created a simple package in Visual Studio, then deployed it to the SSISDB catalog on a different server. I am using a SQL Server Agent job to execute this package with a proxy account, but I am getting this error in the job: "Executed as user: *\test**. Microsoft (R) SQL Server Execute Package Utility Version 13.0.5153.0 for 64-bit Copyright 2016 Microsoft. All rights reserved. Failed to execute IS server package because of error 0x80131904. Server: ****, Package path: ****.dtsx, Environment reference Id: NULL. Description: Login failed for user: '*\test'**. Source: .Net SqlClient Data Provider. .... The package execution failed. The step failed." Kindly help me identify the issue. |
Problems with DNS and IPv6 on Server 2012 and 2016 Posted: 24 May 2021 08:03 PM PDT I manage hundreds of servers for many customers. Most are in the SMB segment, with 1 to 3 servers per customer at most. In the past few weeks I have been getting more and more frequent DNS errors on random domain controllers, from 2008R2 to 2016. Simply put, the DC does not resolve DNS anymore. This has happened on a dozen or so servers lately, and I haven't figured out the cause yet. The weird part: on the same premises, for example, sit 2 VMs, 2 domain controllers for 2 different companies, each with 15 users. Same ISP, same router, same switch. One DC works fine, no problems, while the second DC cannot resolve DNS anymore. On server 1, local DNS fails... but nslookup against 8.8.8.8 works!? On server 2, no problems. Both are AD DCs in a single-domain setup, DNS configured with public forwarders, and each DC's DNS points to itself only. IPv4 and IPv6 are enabled on the servers, but IPv6 is disabled on the router. I have not touched any of those servers for the past few months. Did MS change anything? I do not remember DNS ever switching to IPv6 before... why did it switch now? And why does it work on one server and not the other, when they are both the same (actually the same deployment, just configured for 2 different domains)? |
NAT gateway set up doesn't work Posted: 24 May 2021 06:01 PM PDT I have implemented an environment for a Wordpress website: a load balancer (ALB) connected to an EC2 server plus an Auto Scaling group (for failover). As we need to update Wordpress and install some plugins on the EC2 server, I need to set up a NAT gateway to allow an internet connection from the Wordpress server (EC2) to the internet. I set up the NAT gateway as in the following link. However, I could no longer reach the website (the DNS name of the ALB) afterward. My route table setup looks like this: the public route is assigned to the two public subnets connected to the load balancer; the private routes B and C are assigned to the private subnets: the web server (EC2) plus the database (RDS). Any help would be appreciated. |
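A hedged sketch of the split that usually fixes this (IDs hypothetical): the public subnets' default route must stay on the internet gateway or the ALB becomes unreachable; only the private subnets' default route goes to the NAT gateway, and the NAT gateway itself must sit in a public subnet.

```bash
# Public route table: 0.0.0.0/0 -> internet gateway (keeps the ALB reachable)
aws ec2 create-route --route-table-id rtb-PUBLIC \
    --destination-cidr-block 0.0.0.0/0 --gateway-id igw-XXXX
# Private route tables: 0.0.0.0/0 -> NAT gateway (gives EC2 outbound access)
aws ec2 create-route --route-table-id rtb-PRIVATE \
    --destination-cidr-block 0.0.0.0/0 --nat-gateway-id nat-XXXX
```

The symptom described - the site becoming unreachable right after adding the NAT gateway - most often means the NAT route was attached to the public subnets too.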
ldapsearch: How do I query pwdLastSet and have it make sense? Posted: 24 May 2021 10:05 PM PDT Hi all, I have the same problem as here: enter link description here, but I need it for ldapsearch, not PowerShell. My command: output: |
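pwdLastSet comes back as a Windows FILETIME: 100-nanosecond intervals since 1601-01-01 UTC, which ldapsearch returns raw. The conversion is: divide by 10^7 and subtract the 11644473600-second offset between 1601 and the Unix epoch. A sketch with a made-up value:

```bash
PWDLASTSET=132664000000000000   # hypothetical raw value from ldapsearch
date -d "@$(( PWDLASTSET / 10000000 - 11644473600 ))"
# GNU date prints the human-readable timestamp; a raw value of 0 means
# "must change password at next logon".
```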
nomad shows job as dead Posted: 24 May 2021 09:01 PM PDT I am new to nomad. We are using nomad in production along with docker, consul, vault, and ansible. When I run a nomad job from my local machine, the job registers successfully but shows its status as dead. This is what I see when I do nomad status. And in particular, when I check the status of the job, it gives an error. I would highly appreciate any direction for troubleshooting the above. Thanks |
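Typical first steps for a job that registers but immediately shows dead (job name hypothetical) - the allocation events usually name the real failure:

```bash
nomad job status myjob         # evaluations + allocations for the job
nomad alloc status <alloc-id>  # per-allocation task events and exit codes
nomad alloc logs <alloc-id>    # task stdout/stderr
```

Two common non-failures worth ruling out: batch jobs show dead as their normal terminal state after completing, and jobs with no feasible clients (datacenter or constraint mismatch) die without ever placing an allocation.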
HP SmartArray Rebuild Logical Drive without data loss Posted: 24 May 2021 03:03 PM PDT First, let me explain our situation: we had an HP server with a RAID 1 over 2 drives (for the OS) and a RAID 5 over 4 drives (for VM data). The RAID recently failed. I use the word "failed" very loosely because to this day I still don't know what exactly caused it. We thought at first a failed drive was the problem. But after running the HP Offline ACU, the drive originally thought to have failed shows no errors, while another drive (which originally showed no errors) now shows a warning that it might fail soon. Now, here is my question: can we see how the RAID was configured before, configure it exactly as it was, and not lose any data? Or are we going to lose data no matter what? Our backups are not completely up to date, so I want to try to restore the data to how it was at all costs. I am just not sure whether, if I recreate the logical volume exactly how it was, my data will still be purged. Is there a command-line utility in the offline ACU that will let me recreate a logical volume WITHOUT purging data, if the GUI won't? Thanks for your advice. |
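Before touching anything, the existing layout can be recorded with the read-only parts of the CLI (tool name varies by controller generation: hpacucli, hpssacli, or ssacli; nothing below modifies the array):

```bash
# Controller, array, and logical drive layout, including RAID level and
# stripe size - save this output before any re-create attempt:
hpacucli ctrl all show config detail
```

In general, re-creating a logical drive with identical disks, disk order, RAID level, and stripe size can leave the data readable, but it is not something HP guarantees; with a second drive now pre-failing, imaging the disks first is the safer move.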
How can I move packages between Spacewalk channels? Posted: 24 May 2021 07:02 PM PDT How can I move a set of packages from one channel to another? I haven't found any option in the interface that lets me do this. My current workflow is to download the rpms, delete them from the channel (which appears to delete them from all of Spacewalk!), and then re-upload them with rhnpush. For obvious reasons this is clunky as hell; I must be missing something. Which also makes me ask: how does one remove a package from a channel without deleting it from the entire system? |
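A hedged pointer, from memory rather than a tested setup: spacecmd (shipped with Spacewalk) exposes channel package operations that don't round-trip through rhnpush - verify the exact subcommand names against `spacecmd help` before relying on them:

```bash
# Add the package to the target channel, then remove it from the source;
# the package NVREA is hypothetical:
spacecmd softwarechannel_addpackages target-channel mypkg-1.0-1.el7.x86_64
spacecmd softwarechannel_removepackages source-channel mypkg-1.0-1.el7.x86_64
```

Removing a package from a channel this way is supposed to leave it in Spacewalk's database, unlike deleting the package outright in the web UI.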
Adding multiple users from a group to an Active Directory group using Powershell Posted: 24 May 2021 05:04 PM PDT I have a powershell script that is supposed to go through a specific OU and store the groups in a variable, $groups. Here is the code I use in the script: This step seems to work fine. In the next part of the script, it goes through each group and attempts to add the users from each group to a group where they should all be combined. The code is as follows: When I run the script, it works fine for all users from one group, but for all other users I get the following error: Does anyone know if this is a permissions issue or if there is something I am doing wrong? |
BIRD iBGP - Route not reachable Posted: 24 May 2021 05:04 PM PDT I have two machines participating in the DN42 network, a darknet run by the Chaos Computer Club and others to play around with advanced routing techniques like BGP. The machines are connected via an OpenVPN connection and can ping each other. Now, my challenge: since I own both machines, I'd like to establish an iBGP connection between them, so that they work in the same AS. I already have successful BGP peerings with other ASes, but in this specific case the propagated routes are marked as unreachable. If that helps: one of the machines is a Debian server, the other an OpenWRT router. I am using BIRD for the BGP connections. I am quite new to this advanced routing stuff, so I would be glad to receive some help. This is the example BIRD config from one of the machines; the config on the other machine looks similar. As you can see, I have one /24 assigned to my AS, and I want to split it into two /25 subnets, the first assigned to my home machine, the second to a server in a datacenter. So, the routes between both machines are imported and exported, but birdc shows them as unreachable, and the kernel routing table shows them without any interface identifier. When I try to ping some peers of the other machine through the connection, the network is unreachable... so, please help me. |
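Unreachable iBGP routes are the classic symptom of an unresolvable BGP next hop: iBGP does not rewrite the next-hop attribute, so the receiving BIRD sees routes pointing at an address it has no route to. A hedged sketch of the usual fix (BIRD 1.x syntax; the AS number and neighbor address are hypothetical):

```bash
cat >> /etc/bird/bird.conf <<'EOF'
protocol bgp ibgp_peer {
    local as 4242420000;               # hypothetical DN42-style ASN
    neighbor 172.20.0.2 as 4242420000; # same AS = iBGP
    next hop self;                     # advertise ourselves as next hop
    import all;
    export all;
}
EOF
birdc configure
```

The alternative is to leave next hops alone and make sure each machine has a route to the other's OpenVPN tunnel address in the table BIRD resolves against.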
nginx proxied responses terminating/truncating Posted: 24 May 2021 03:03 PM PDT I have this nginx config on a host (host1): The backend nginx config looks like this: The backend is actually on a different host (host2), and due to our firewall config I need to reverse-tunnel the connections (this may be irrelevant to the question but I'm including it for completeness): This setup serves large-ish files (GBs). The problem is that requests abruptly terminate/truncate short of their full size, always a bit over 1GB (e.g. 1081376535 or 1082474263 bytes). No errors in the logs, however, and nothing jumps out from verbose debug logging. There's always a Content-Length in the response, too. Anyway, just wondering if anyone has an inkling as to what's up. If it makes a difference, both hosts are Ubuntu 12.04. [1] I say mostly because the problem is replaced by another one: downloads all stop at exactly 2147484825 bytes, which happens to be 0x80000499, but I haven't done enough debugging to determine if this is (as I suspect) between the frontend server and the client. |
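One lead worth checking, offered as an assumption rather than a diagnosis: with proxy buffering on, nginx spools large responses to disk up to proxy_max_temp_file_size, whose default of 1024m sits suspiciously close to the observed ~1GB cutoffs.

```bash
# Is the limit at its default anywhere in the config?
grep -Rn "proxy_buffering\|proxy_max_temp_file_size" /etc/nginx/
# Setting "proxy_max_temp_file_size 0;" in the proxy location disables the
# temp file entirely (nginx then relays synchronously) - a cheap way to
# test whether the truncation point moves.
```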
Roundcube domain change for a single account Posted: 24 May 2021 07:02 PM PDT I have inherited a server running an ugly Roundcube PHP interface for mail accounts. I would like to change the domain for a specific account to a custom domain that I have set up in my DNS. I have added the users to the system and created aliases in my postfix configuration. In this file: |
Network Services Write/Modify Permissions on Inetpub Folder Potential security issues Posted: 24 May 2021 08:03 PM PDT I have an asp.net web application that serves as a Content Management System for a website. It was originally on an IIS6 server, but was recently moved to an IIS7 server. After the move, my CMS no longer worked; I received the error message: Access to the path 'C:\inetpub\SITEFOLDER\FILENAME' is denied. Every time I save a page in my CMS I need to write to an aspx file on the server as well as update the web.sitemap file. I compared the file permissions between the two servers and noticed that the old IIS6 server had Write/Modify permissions on the inetpub folder for the "Network Services" user, inherited by all the site files on the server. The new server does not have Write/Modify permissions for "Network Services" on the inetpub folder, which is obviously why my CMS isn't working there. The solution is, of course, to set Write/Modify permissions on the inetpub folder on the new server so that ASP.NET has the proper permissions to write to and modify any file within the site. My question is: is this an acceptable solution on a production web server? Are there any security holes I am opening up by allowing Network Services to have Write/Modify permissions on the inetpub folder? I know giving IUSR write/modify permissions on the entire inetpub folder can lead to security issues and you have to be careful which files are given write permissions for IUSR; I just don't know if the same rule applies to the "Network Services" user. |
Preventing access to apache sites via server ip in browser Posted: 24 May 2021 09:01 PM PDT I'm trying to set up an apache webserver to serve multiple websites, and I'd like to block access via the IP address of the server itself, i.e. if someone puts the bare IP into their browser. Where and how do I go about configuring this, or is this a bad idea? |
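A minimal sketch of the standard approach (Apache 2.4 syntax, Debian-style paths assumed): Apache answers requests whose Host header matches no vhost's ServerName - including bare IPs - with whichever VirtualHost loads first, so making that first vhost a deny-all catch-all does the job.

```bash
cat > /etc/apache2/sites-available/000-catchall.conf <<'EOF'
<VirtualHost *:80>
    # Matches any request no other vhost claims, which includes
    # requests made directly to the server's IP address.
    ServerName catchall.invalid
    <Location "/">
        Require all denied
    </Location>
</VirtualHost>
EOF
a2ensite 000-catchall && systemctl reload apache2
```

The leading 000- keeps the file alphabetically first so it stays the default vhost. On Apache 2.2 the equivalent directives are `Order deny,allow` / `Deny from all`.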
truncated headers from varnish configuration file Posted: 24 May 2021 04:03 PM PDT I need help with Varnish. I have a varnish configuration file, default.vcl. I can see from the output of varnishstat that the hit ratio is quite high, and I've also checked varnishtop -i txurl to see which requests go to the backend. Now, the problem is that the X-Cache header and the other varnish headers are missing from the HTTP responses. In default.vcl there is an option to delete those headers, but I need help on how to keep those headers in the HTTP response from Varnish itself. My default.vcl file: |
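If the goal is for Varnish itself to emit the diagnostic headers, the usual pattern is to set them in vcl_deliver rather than unset them - a hedged sketch (Varnish 4+ VCL syntax assumed, appended here for illustration):

```bash
cat >> /etc/varnish/default.vcl <<'EOF'
sub vcl_deliver {
    # obj.hits counts cache hits for the delivered object
    if (obj.hits > 0) {
        set resp.http.X-Cache = "HIT";
    } else {
        set resp.http.X-Cache = "MISS";
    }
}
EOF
systemctl reload varnish
```

Multiple vcl_deliver subs run in file order, so any later `unset resp.http.X-Cache` in the existing config would still win; that remove/unset stanza has to be deleted or moved above this one.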
Assign multiple IP addresses in NAT mode to the same VM Posted: 24 May 2021 06:01 PM PDT I'm aware that VirtualBox assigns the same IP to different VMs (by default 10.0.2.15). However, I have a case where a VM has a single vNIC, and a Linux namespace inside the VM attempts to get an IP from DHCP and receives that same IP from VirtualBox, so both interfaces end up with identical addresses. Is there a way to have VirtualBox correctly return different DHCP responses, or is it a limitation of VirtualBox that it doesn't correctly handle DHCP requests coming from different source MAC addresses on the same VM? |
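A hedged workaround (names hypothetical; whether it fixes the per-MAC DHCP behavior is an assumption to verify): switching from per-VM NAT to a shared NAT Network gives the VM a DHCP server that serves a whole subnet rather than one canned lease per adapter.

```bash
VBoxManage natnetwork add --netname natnet1 \
    --network "10.0.2.0/24" --enable --dhcp on
VBoxManage modifyvm "myvm" --nic1 natnetwork --nat-network1 natnet1
```

Failing that, giving the namespace a static address at least avoids the duplicate-lease collision.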
BAD stripes in controller and windows utility to remap bad blocks Posted: 24 May 2021 04:03 PM PDT Ok, here is my story. I have a 3-disk RAID 5; one of the disks produced a few surface errors and I didn't know. I tried to repair a corrupted database table in MySQL and the machine froze on a write, so I rebooted with a hard reset while the other 2 disks were apparently still writing data. The bad disk has since been replaced, so the physical HDDs are fine now, but the logical array in the Adaptec 3405 controller shows bad stripes. CHKDSK /f removed some tmp files that were bad, but I have a few files that I cannot copy from this drive (I/O error), and I guess they are located in the "bad stripes" section of the disk. This is a production server; I cannot change it at the moment (I will move the data to another server, but that will take time), and the unreadable files are not important. NTFS behaves as if the disks had surface errors because the Adaptec controller marked those stripes as unreadable. I know I can rebuild the array, but I can't right now because it is a production server. I asked Adaptec whether new files would avoid the marked stripes and they told me: NO. So it is a dangerous situation: at any moment the database could be written to a bad stripe (a bad block from the OS's perspective). What I need is a tool that does a surface scan and collects all the bad blocks (my imaging backup program reported 32 of them) into one "bad" file, so that new files can never be written there. The disks won't degrade further, since the faulty disk was swapped. I know that remapping bad blocks is a bad idea while disks are failing, but my situation is stable now; I just need the bad blocks mapped to some artificial bad.txt file, for example, residing on the disk so that nothing is ever written there again. I hope I was clear enough. I cannot find such software; I found a surface scanner, but it does not produce a bad file :-( I will not delete my 3 bad files, but there may be other sectors in the controller's bad stripe, and I'd like a new file created pointing to those sectors so no more damage can occur on writes. I do not need to read those files at all. |
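For what it's worth, NTFS already has exactly this mechanism built in: `chkdsk X: /r` performs a surface scan and adds every unreadable cluster to the hidden $BadClus metafile, after which NTFS never allocates those clusters again. That is effectively the "artificial bad file" described above, maintained by the filesystem itself, so a third-party remapping tool may not be needed - though on an array whose controller reports bad stripes, running it during a maintenance window with a current disk image on hand is the prudent way to do it.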