Recent Questions - Server Fault
- Does storage engine ndbcluster support Data-at-Rest Encryption?
- AWS EKS add-on coredns status degraded and node group creation failed (nodes are unable to join the cluster)
- Centos server logs "systemd: Started Telnet Server (127.0.0.1:52050)." every minute
- Unbound error - unbound.service: Start request repeated too quickly
- Scuttle Boot Option
- Getting ERR_NAME_NOT_RESOLVED only from MY PC and only from WIFI
- Passive ftp not working behind nat
- Windows Server Backup Fails: There is not enough disk space to create the volume shadow copy
- Ubuntu 18.04.4 Zimbra 8.8.15 warning: connect to transport private/smtp-amavis: Connection refused
- Can something in my Nginx config imply why my backend is not sending the 'Access-Control-Allow-Origin' header in POST request?
- iSCSI separation from Ethernet via VLAN
- How to upgrade rhel 7.3 to 8.1 using iso cd
- Problems with setting up bonding on Netplan (Ubuntu server 18.04)
- Scheduled Task in Windows Server 2016, run by non-admin Users
- Dockerfile cloning from private gitlab with ssh and deploy key
- Win 2012 R2 / IIS 8.5 intermittent Connection Refused
- AWS API Gateway Custom Domain: the domain you provided is already associated with an existing CloudFront distribution
- Empty nginx logs
- NGINX subdomain with proxy_pass
- Tomcat startup script on RHEL not starting tomcat on reboot
- How to modify querystring using URL rewriting?
- iptables to allow input and output traffic to and from web server only
- PAM LDAP configuration for non-local user authentication
- How to filter TCP packets based on flags using Packet Filter
- Concatenating files to a virtual file on Linux
- Error 503 Service Unavailable Varnish
- save Performance Monitor settings
- Troubleshooting 'Could Not Start' scheduled task error:
Does storage engine ndbcluster support Data-at-Rest Encryption?
Posted: 12 Sep 2021 09:16 PM PDT
I want to enable Data-at-Rest Encryption in ndbcluster. I have tried to find out how to do this, but nothing I found solves the problem. Is there another way? (I am interested in every solution, even if MySQL Enterprise is required.) My environment: Ubuntu 21.04, MySQL Cluster 8.0.26. Regards, Rapepat
AWS EKS add-on coredns status degraded and node group creation failed (nodes are unable to join the cluster)
Posted: 12 Sep 2021 09:06 PM PDT
I'm trying to create a node group on an EKS cluster (region = ap-south-1), but it is failing to join the cluster. Health issues: NodeCreationFailure - Instances failed to join the kubernetes cluster. I found that it may be because the AWS EKS add-on (coredns) for the cluster is degraded. I tried to create a new cluster, but it shows the same degraded status for the add-on. Health issues show: InsufficientNumberOfReplicas - The add-on is unhealthy because it doesn't have the desired number of replicas. Other clusters with node groups in the same region are working fine; all of their add-ons are in the Active state. I'm creating the cluster from the console.
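For what it's worth, on a brand-new cluster coredns often reports Degraded/InsufficientNumberOfReplicas simply because there are no worker nodes yet on which to schedule its replicas, so the node-join failure is usually the thing to chase first. A hedged diagnostic sketch (cluster name and region are placeholders, not taken from the question):

```bash
# Add-on status and health detail as EKS sees it
aws eks describe-addon --cluster-name my-cluster --addon-name coredns --region ap-south-1

# Are any nodes registered at all, and are the coredns replicas schedulable?
kubectl get nodes -o wide
kubectl -n kube-system get deployment coredns
kubectl -n kube-system describe pods -l k8s-app=kube-dns   # look for Pending / FailedScheduling events
```

If no nodes register, the usual suspects are node-group subnets without a route to the cluster endpoint (NAT/IGW), missing cluster security-group rules, or an incorrect node IAM role.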
Centos server logs "systemd: Started Telnet Server (127.0.0.1:52050)." every minute
Posted: 12 Sep 2021 07:18 PM PDT
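That log pattern is what socket-activated, per-connection services produce: something on the box connects to the telnet port roughly once a minute and systemd spawns an instance each time. A hedged way to confirm this and, if telnet isn't wanted at all, to stop it (unit names assume the stock telnet-server package):

```bash
systemctl status telnet.socket                 # is the socket unit active?
journalctl -u 'telnet@*' --since "-10min"      # the per-connection instances
ss -tnp '( sport = :23 or dport = :23 )'       # which local process keeps connecting to port 23
systemctl disable --now telnet.socket          # if the telnet service should not be offered at all
```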
Unbound error - unbound.service: Start request repeated too quickly
Posted: 12 Sep 2021 04:41 PM PDT
I am new to using Unbound. I have a network from 192.168.50.1 to 192.168.50.240, and I'd like to use DoH for non-cached data. My conf file: What is wrong in my conf file? Thanks a lot!
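"Start request repeated too quickly" only means the daemon is crashing at startup and systemd has given up restarting it, so the first step is to read the real error. As a hedged illustration, below is a minimal Unbound setup that serves a LAN and forwards everything upstream over TLS (DoT rather than true DoH, which is what most packaged Unbound versions support for upstream traffic); addresses, the drop-in path, and the CA bundle location are assumptions, not the poster's values:

```bash
unbound-checkconf            # prints the exact config line that breaks
journalctl -u unbound -e     # the actual crash reason behind the restart loop

cat > /etc/unbound/unbound.conf.d/forward-tls.conf <<'EOF'
server:
    interface: 192.168.50.1
    access-control: 192.168.50.0/24 allow
    tls-cert-bundle: /etc/ssl/certs/ca-certificates.crt

forward-zone:
    name: "."
    forward-tls-upstream: yes
    forward-addr: 1.1.1.1@853#cloudflare-dns.com
    forward-addr: 9.9.9.9@853#dns.quad9.net
EOF

systemctl restart unbound
```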
Scuttle Boot Option
Posted: 12 Sep 2021 03:15 PM PDT
I'm trying to devise a simple boot option that would secure-erase one or more drives in a computer. Imagine a scenario such as airport security where somebody has the authority to compel you to turn on and unlock a laptop that contains trade secrets. You power on the device and enter a password, but instead of logging into the OS, a script is triggered that executes a secure erase on the boot device. I think the following features would be required or desirable:

I think a UEFI utility might be ideal for fulfilling requirements 1 and 5, but I'm not aware of the existence of such a thing. I know Lenovo has a bootable utility to erase an NVMe device, but it boots in legacy mode and requires multiple steps, including a menu, a security code, a reboot, and finally entering the security code again before the erase is executed. The process wouldn't meet the first requirement and would not be quick or subtle enough to be practical in the described scenario. Of course one could set up a dedicated Linux environment similar to the Parted Magic distribution and have a simple erase script executed automatically at boot time or login, but I'd prefer not to dedicate a whole partition to such a utility, and I'm not sure whether a secure erase would even run properly on a boot drive in a Linux environment. Any Windows-based secure-erase utility I've tried won't work on the boot drive. I've secure-erased drives using Linux bootable USB sticks, but I've never tried it on the Linux boot device itself. This points to another possibility if running Linux as the primary OS on the device: use the installed OS, but configure a dedicated user account that runs the scuttle script on login. But again, I don't know if this would work on the system boot drive, plus this approach requires unlocking the boot drive, in violation of requirement #3 above. Any suggestions?
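For reference only (not a recommendation for the scenario described): the low-level erase primitives themselves are exposed by hdparm for SATA devices and nvme-cli for NVMe devices. Device names and the password below are placeholders, and the commands destroy all data on the target:

```bash
# SATA: ATA Secure Erase (the drive must not be "frozen"; a suspend/resume
# or power cycle usually un-freezes it)
hdparm -I /dev/sdX | grep -A8 Security
hdparm --user-master u --security-set-pass Eins /dev/sdX
hdparm --user-master u --security-erase Eins /dev/sdX

# NVMe: format with a secure-erase setting (1 = user-data erase, 2 = crypto erase)
nvme format /dev/nvme0n1 --ses=1
```

Running either against the disk that hosts the live root filesystem is unreliable at best, which is why the concern about erasing the boot drive from its own OS is well founded; a minimal initramfs or UEFI shell environment sidesteps that.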
Getting ERR_NAME_NOT_RESOLVED only from MY PC and only from WIFI
Posted: 12 Sep 2021 04:29 PM PDT
I am setting up the website melius.live and it works fine from all my devices when using mobile data, but not over WiFi (any WiFi, not just a specific one). However, it works for everyone I ask to test it. I wrote the web app and set up the server myself, and I am the only one who is unable to access it (unless using mobile data). Could it be related to some DNS settings? Other people on the same WiFi can access it, yet the issue occurs on my Mac, iPad, and iPhone. Thank you.
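A hedged way to narrow this down on the Mac is to compare what the WiFi-assigned resolver returns with a known-good public resolver, then flush the local cache once the record looks right (the domain is taken from the question; the rest is standard macOS tooling):

```bash
scutil --dns | head -n 25          # which resolvers the Mac actually uses on WiFi
dig melius.live A                  # answer from the current (WiFi/DHCP) resolver
dig @1.1.1.1 melius.live A         # answer from a public resolver, for comparison
sudo dscacheutil -flushcache; sudo killall -HUP mDNSResponder   # flush the local DNS cache
```

If the two dig answers differ, the WiFi resolver (often the router) is serving a stale or negative cache entry; if they match and the browser still fails, a stale entry cached on the devices themselves, or Secure DNS/DoH settings in the browser, are the usual suspects.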
Passive ftp not working behind nat
Posted: 12 Sep 2021 02:49 PM PDT
I have a big problem. Let me explain. I have configured two machines, one called "fw", which is the firewall, and another one connected to it called "server"; both are Debian 10 (buster) systems. The fw machine uses iptables to masquerade the IP. "Public IP": 88.20.100.2, local range: 192.168.150.0/24. This is the configuration of my FTP server (vsftpd) for passive mode: Nothing special. It works if I have this iptables rule enabled on the firewall (enp0s9 = internet, enp0s3 = LAN): My problem is that I want to open ports 1000:2000 only when the connection is related to the FTP server, not always. I have tried with -m state and -m conntrack, but I guess I did something wrong. Any ideas? Thanks
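A hedged sketch of the usual way to make RELATED matching work for FTP on a Debian 10 firewall: the FTP conntrack/NAT helpers must be loaded, and on recent kernels the helper also has to be assigned explicitly because automatic helper assignment is disabled. The internal server address is a placeholder modeled on the question, not the poster's actual host:

```bash
modprobe nf_conntrack_ftp
modprobe nf_nat_ftp

# Kernels >= 4.7 no longer auto-assign helpers; tag FTP control traffic explicitly
iptables -t raw -A PREROUTING -p tcp --dport 21 -j CT --helper ftp

# DNAT the control channel to the internal FTP server (address is hypothetical)
iptables -t nat -A PREROUTING -i enp0s9 -p tcp --dport 21 -j DNAT --to-destination 192.168.150.10

# Allow the control channel, then let the helper open the passive data ports on demand
iptables -A FORWARD -d 192.168.150.10 -p tcp --dport 21 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
iptables -A FORWARD -d 192.168.150.10 -p tcp --dport 1000:2000 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
```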
Windows Server Backup Fails: There is not enough disk space to create the volume shadow copy
Posted: 12 Sep 2021 05:56 PM PDT
We have a brand-new Dell PowerEdge running Windows Server 2012 R2. The server is an Active Directory Domain Controller with two NTFS partitions: C:\ for the OS (400 GB) and E:\ for data. I connect a 1 TB external drive for Windows Server Backup. When I try to back up the server it fails. The message is: I can successfully back up when including only the System State and OS (C:) items. If I adjust the backup selection to include the Recovery partition, it fails. If I choose to include Bare Metal Recovery, which implicitly includes the EFI System Partition and Recovery partition, it fails as well.
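This symptom is commonly tied to the tiny Recovery/EFI partitions, which have no room for their own shadow-copy storage. A hedged sketch of the usual inspection steps, and of the workaround of pointing shadow storage for the small volume at a larger one (the volume identifiers are placeholders; `vssadmin list volumes` prints the GUID path to use for a partition without a drive letter):

```
vssadmin list volumes
vssadmin list shadowstorage

rem Give the small volume shadow storage that lives on C: instead of on itself
vssadmin add shadowstorage /for=\\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\ /on=C: /maxsize=512MB
```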
Ubuntu 18.04.4 Zimbra 8.8.15 warning: connect to transport private/smtp-amavis: Connection refused
Posted: 12 Sep 2021 05:49 PM PDT
After updating my Ubuntu server I can't send email from Zimbra. I found this error in my logs:
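That Postfix warning means the amavis service behind the private/smtp-amavis transport isn't listening, which after OS updates is frequently just amavis (or the whole Zimbra stack) not having come back up. A hedged first-pass check, run as the zimbra user with the standard Zimbra control scripts:

```bash
su - zimbra
zmcontrol status            # look for antispam/antivirus (amavis) marked as not running
zmamavisdctl restart        # restart just the amavis service
zmcontrol restart           # or restart the whole stack if several services are down
postqueue -p                # confirm the deferred mail starts draining afterwards
```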
Can something in my Nginx config imply why my backend is not sending the 'Access-Control-Allow-Origin' header in POST request?
Posted: 12 Sep 2021 02:52 PM PDT
*Edit 1: The error seems to be only with I have a frontend website on The website calls a backend function to register a user at I use Axios to POST the username and password. The browser sends two requests: an OPTIONS pre-flight request, and then the POST request. The user is created successfully, however the browser throws an error for the POST request: And indeed the header is missing in the response to the POST. Assuming my backend CORS file is configured properly, could the issue come from my Docker + Nginx setup blocking it or proxying the headers to the wrong place? This is my nginx config: and this is my **Edit 2: The backend is Laravel and it has a CORS middleware that is supposed to take care of it. And in fact it does seem to be working, because This is the CORS config file (
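By default nginx passes backend response headers, including Access-Control-Allow-Origin, straight through to the client; it only loses them if something in the config hides them or the POST is served by a different location/if block than the pre-flight. As a hedged illustration (upstream name and path are placeholders, not the poster's actual setup), a proxy location that does not interfere with CORS headers looks roughly like this:

```nginx
location /api/ {
    proxy_pass         http://backend:9000;          # hypothetical upstream
    proxy_set_header   Host               $host;
    proxy_set_header   X-Real-IP          $remote_addr;
    proxy_set_header   X-Forwarded-For    $proxy_add_x_forwarded_for;
    proxy_set_header   X-Forwarded-Proto  $scheme;
    # Things to look for elsewhere in the config:
    #  - proxy_hide_header Access-Control-Allow-Origin;  (would strip it)
    #  - an "if ($request_method = OPTIONS)" block that answers pre-flights
    #    itself while POSTs take a different path
    #  - add_header used without "always", which skips 4xx/5xx responses
}
```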
iSCSI separation from Ethernet via VLAN
Posted: 12 Sep 2021 08:41 PM PDT
I've set up a small cluster of a few servers along with a SAN. The servers are running Ubuntu 20.04 LTS. Using instructions provided by the vendor (I can't find where I read it), they suggested that the iSCSI connections between the SAN and the servers should be (or maybe it was "must be"?) separated from any Ethernet traffic. Because of this, I've configured two VLANs on our switch -- one for iSCSI traffic and one for Ethernet traffic between the servers (which the SAN is not on). So far, it seems fine. Suppose the Ethernet is on 172.16.100.XXX/24 and iSCSI is on 172.16.200.XXX/24. More specifically, the addresses look something like this:
Not surprisingly, I can
What I'm worried about is whether I should better separate non-iSCSI traffic from the 172.16.200.X subnet with firewall rules, so that port 22 (ssh) is blocked on all servers. I'm not concerned about the reverse -- the SAN is only on VLAN 200. It doesn't know VLAN 100 exists, so it won't suddenly send iSCSI traffic down that VLAN. I'm using the Oracle Cluster Filesystem, which seems to use port 7777 -- perhaps I should block all ports on the VLAN so that only port 7777 is used? Does having Ethernet traffic on an iSCSI network create problems (either lag or errors) that I should be aware of? Thank you!
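If you do decide to lock the storage VLAN down, an allow-list on each server's VLAN interface is a simple, hedged approach: permit only the iSCSI and OCFS2 ports and drop everything else arriving on that interface (the interface name is a placeholder; 3260 is the standard iSCSI target port and 7777 the OCFS2 port mentioned in the question):

```bash
IF=vlan200   # hypothetical name of the interface on the 172.16.200.0/24 VLAN

iptables -A INPUT -i "$IF" -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -i "$IF" -p tcp --dport 3260 -j ACCEPT   # iSCSI (only needed if this host is itself a target)
iptables -A INPUT -i "$IF" -p tcp --dport 7777 -j ACCEPT   # OCFS2 cluster traffic
iptables -A INPUT -i "$IF" -j DROP                          # everything else, including ssh
```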
How to upgrade rhel 7.3 to 8.1 using iso cd
Posted: 12 Sep 2021 09:20 PM PDT
I wish to upgrade RHEL 7.3 to 8.1 using an ISO CD. I mounted it to /home/cdrom. The ISO contains the following directories: BaseOS, AppStream, RPM-GPG-KEY-redhat-release, and so on. I created one repo file called /etc/yum.repos.d/rhel8.repo. It contains: Then I executed yum update, but it didn't work. I also tried with baseurl=file:///home/cdrom/BaseOS, but there were no results. I got error messages such as 'You could try using --skip-broken to work around the problem' and 'Error: Invalid version flag: if'. What can I do?
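Pointing yum at RHEL 8 repositories from a RHEL 7 system cannot work: yum does not perform major-version upgrades, and 'Error: Invalid version flag: if' is what RHEL 7's rpm prints when it meets the rich ("if") dependencies used by RHEL 8 packages. The supported in-place path is the Leapp utility, roughly as sketched below; it requires bringing the system up to the latest 7.x first, and repo IDs, package names and exact flags may differ by subscription type:

```bash
# Bring RHEL 7.3 up to the latest 7.x and install Leapp
subscription-manager repos --enable rhel-7-server-extras-rpms
yum -y update && reboot

yum -y install leapp leapp-repository   # package names have varied; newer docs use "leapp-upgrade"
leapp preupgrade --target 8.1           # writes /var/log/leapp/leapp-report.txt listing blockers
leapp upgrade --target 8.1              # stages the upgrade
reboot                                  # the actual upgrade runs during this reboot
```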
Problems with setting up bonding on Netplan (Ubuntu server 18.04)
Posted: 12 Sep 2021 09:02 PM PDT
I have a dual-port network card and I want to bond both ports and balance the traffic between them, with one static IP address. I did this on Ubuntu 16.04 and it worked fine. I'm now trying to set up the same thing in Netplan and am struggling. My config is below...
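For comparison, a hedged Netplan bond for 18.04 might look like the sketch below; the interface names, addresses and bonding mode are assumptions, not the poster's values (802.3ad needs LACP configured on the switch, otherwise balance-alb or balance-rr is the usual fallback):

```yaml
# /etc/netplan/01-bond.yaml  (validate with "netplan try", then "netplan apply")
network:
  version: 2
  renderer: networkd
  ethernets:
    enp3s0f0: {}
    enp3s0f1: {}
  bonds:
    bond0:
      interfaces: [enp3s0f0, enp3s0f1]
      addresses: [192.168.1.50/24]
      gateway4: 192.168.1.1
      nameservers:
        addresses: [192.168.1.1]
      parameters:
        mode: 802.3ad
        mii-monitor-interval: 100
```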
Scheduled Task in Windows Server 2016, run by non-admin Users
Posted: 12 Sep 2021 04:04 PM PDT
In earlier Windows Server versions (prior to 2016) it was possible to grant non-admin users permission to run a scheduled task by doing the following steps:
In Server 2016 this doesn't work anymore. Do you know how to do it? Thank you. Related post, which didn't get answered and didn't help either: Allow non-admin user to run scheduled task in Windows Server 2016
Dockerfile cloning from private gitlab with ssh and deploy key
Posted: 12 Sep 2021 07:07 PM PDT
(EDIT) This problem was also happening from my laptop, using root and my own user, both of which get the greeting when trying to ssh as the git user. I then tried the Ansible playbook and it raised errors for this repo too. I tried another repo and that one clones flawlessly. The problem, then, doesn't seem to be with git, Docker or ssh, but with the GitLab configuration. In a Dockerfile I am trying to clone private repositories hosted on a company server running GitLab, set up with a non-standard ssh port. This is what I expected to run (alongside some params in the ssh config file): Things I've checked already:
RUNning this from the container: Gets the result
But the git clone command gets:
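For comparison, a hedged Dockerfile sketch for cloning over ssh from a GitLab host on a non-standard port; the host, port, repository path and key name are all placeholders (with BuildKit, --mount=type=ssh is the cleaner option so the key never ends up in an image layer):

```dockerfile
FROM debian:buster-slim
RUN apt-get update \
 && apt-get install -y --no-install-recommends git openssh-client \
 && rm -rf /var/lib/apt/lists/*

# Deploy key copied in only for illustration -- prefer BuildKit ssh mounts
COPY deploy_key /root/.ssh/id_ed25519
RUN chmod 700 /root/.ssh && chmod 600 /root/.ssh/id_ed25519 \
 && printf 'Host gitlab.example.com\n  Port 2222\n  User git\n  IdentityFile /root/.ssh/id_ed25519\n  StrictHostKeyChecking accept-new\n' > /root/.ssh/config

# The ssh:// form is required when the port is non-standard
RUN git clone ssh://git@gitlab.example.com:2222/group/project.git /opt/project
```

If the interactive ssh greeting works but git clone fails for one specific repository, checking that the deploy key is actually enabled on that project (and has the needed access level) in GitLab's project settings is usually the missing piece, which matches the poster's own conclusion.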
Win 2012 R2 / IIS 8.5 intermittent Connection Refused
Posted: 12 Sep 2021 06:07 PM PDT
We suffer from a connection-refused problem when users of our web site try to open it. The problem happens in a random manner, about once or twice a month, and continues for a few hours. While it is happening, almost all connections are rejected with a connection-refused error, but there are some successful connections in the meantime.
There is plenty of RAM (more than ~60%) and CPU (more than ~70%) available while this problem happens. We also checked the network firewall: traffic is apparently passing through it without problems, and the issue occurs at the server level. We cannot even open the web site by connecting via Remote Desktop and opening it locally. We checked for an exhausted-port problem and that does not seem to be it. The number of SYN packets is high, but it is similar to other days when everything is fine. This is a one-day summary of the HTTPERR log: Any help is really appreciated in finding out why we get connection refused when trying to open the web site hosted on this server.
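When the refusals are server-side and even local requests fail, HTTP.sys request-queue exhaustion or TCP-level limits are more likely culprits than the IIS worker processes themselves. A hedged set of checks to run while the incident is happening:

```
rem Request-queue state as seen by HTTP.sys (per application pool)
netsh http show servicestate view=requestq

rem Rough count of sockets stuck in TIME_WAIT (ephemeral port pressure)
netstat -ano -p tcp | find /c "TIME_WAIT"

rem Size of the ephemeral port range
netsh int ipv4 show dynamicport tcp
```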
AWS API Gateway Custom Domain: the domain you provided is already associated with an existing CloudFront distribution
Posted: 12 Sep 2021 03:12 PM PDT
I'm simply attempting to set up a custom domain in API Gateway. I have an ACM certificate for "*.mysite.com.au" that is currently being used to serve a static S3 website via CloudFront at "beta.mysite.com.au". I wish to create a custom domain for "api.mysite.com.au" with this certificate. However, I'm receiving the following error in the AWS API Gateway console:
I'm not currently using "api.mysite.com.au" in a CloudFront distribution. So I'm lost. Has anyone encountered this issue before? And if so, how may I go about resolving it? Thanks in advance, Strainy
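Since edge-optimized API Gateway domains are themselves fronted by CloudFront, this error means some distribution somewhere (in this account or another) already claims that CNAME. A hedged way to look for it in your own account (the domain is taken from the question):

```bash
aws cloudfront list-distributions \
  --query "DistributionList.Items[?Aliases.Items && contains(Aliases.Items, 'api.mysite.com.au')].{Id:Id,Domain:DomainName}" \
  --output table
```

If nothing turns up, using a regional (instead of edge-optimized) endpoint type for the custom domain avoids the CloudFront CNAME claim entirely, and AWS support can release a CNAME held by a distribution in another account.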
Empty nginx logs
Posted: 12 Sep 2021 03:04 PM PDT
I'm trying to get nginx to log access and error logs. My logs currently have very old content, a mix of plain logs and gzipped logs. My configuration is: Strangely, despite the I tried setting the owner of the existing There's nothing (relevant) in I have tried a mixture (!) of reloading and restarting nginx after changing the configuration file. Still nothing... How is my configuration incorrect?
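A hedged set of checks that usually pins this down: confirm which log paths the running configuration really uses, and whether the processes are still writing to rotated-away file handles (the pid-file path below is the Debian/Ubuntu default and may differ):

```bash
nginx -T 2>/dev/null | grep -E 'access_log|error_log'   # effective log directives, all includes expanded
ls -l /proc/"$(cat /run/nginx.pid)"/fd | grep -i log     # which log files nginx actually has open
nginx -s reopen                                          # re-open log files after rotation or moves
```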
NGINX subdomain with proxy_pass
Posted: 12 Sep 2021 04:04 PM PDT
I have nginx running as a reverse proxy for a Nextcloud server hosted on Apache on a different virtual machine. I'd like to be able to access it via cloud.example.com. With my current rules I have to put in cloud.example.com/nextcloud. I have googled and searched, and the closest I got was being able to go to cloud.example.com and have it redirect to cloud.example.com/nextcloud, but I'd like to keep the /nextcloud out of the address bar if possible. Do I need to have a /nextcloud location that does the proxy pass in addition to the /? This is my current nginx.conf:
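A hedged sketch of the usual approach: give proxy_pass a URI part with a trailing slash, so nginx maps / on the proxy host to /nextcloud/ on the backend and the path never shows in the address bar (the backend address is a placeholder; TLS is left out for brevity):

```nginx
server {
    listen 80;
    server_name cloud.example.com;

    location / {
        # "/" on this host is rewritten to "/nextcloud/" on the backend
        proxy_pass http://192.0.2.10/nextcloud/;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

Nextcloud itself also builds absolute URLs containing its web root, so the 'overwritewebroot' setting in config.php on the Apache side usually has to be adjusted as well for redirects and assets to keep working.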
Tomcat startup script on RHEL not starting tomcat on reboot
Posted: 12 Sep 2021 08:04 PM PDT
My Tomcat startup script is not starting Tomcat on reboot of the Red Hat Enterprise Linux server. I have narrowed it down to the start function: When I reboot the server with /sbin/reboot, the contents of the file that I echo out to are: When I run the tomcat script in /etc/rc.d/init.d as follows: the contents of the file are: * I have also tried using the daemon function -- that didn't work for me either *
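If the box runs RHEL 7 or later, a hedged alternative is to skip the SysV init script entirely and let systemd start Tomcat; the paths, user and CATALINA_HOME location below are assumptions, not the poster's layout:

```bash
cat > /etc/systemd/system/tomcat.service <<'EOF'
[Unit]
Description=Apache Tomcat
After=network.target

[Service]
Type=forking
User=tomcat
Environment=JAVA_HOME=/usr/lib/jvm/jre
Environment=CATALINA_HOME=/opt/tomcat
ExecStart=/opt/tomcat/bin/startup.sh
ExecStop=/opt/tomcat/bin/shutdown.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable --now tomcat
```

If it must stay a SysV script, `chkconfig --add tomcat && chkconfig tomcat on` plus absolute paths inside the start function covers the common failure mode: init scripts run with a minimal environment at boot, which is the usual reason they work interactively but not on reboot.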
How to modify querystring using URL rewriting?
Posted: 12 Sep 2021 05:03 PM PDT
I have very little knowledge of URL rewriting, so I'm not sure whether this can be done with URL Rewrite. I have a URL like www.test.com/categroy.cfm?categoryid=12&weight=any&brandid=23. For the weight parameter: if its value is 'any', I want to remove it from the URL. For the brandid parameter: if brandid is 'any', remove it; otherwise replace it with 'filter_brand=value'. Output like: www.test.com/categroy.cfm?categoryid=12&filter_brand=23. Is it possible? If yes, could anyone please show me an example? I am using IIS.
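It is possible with the IIS URL Rewrite module by matching the query string in a condition and rebuilding it in the action. A hedged sketch handling exactly the parameter order shown in the question (a production rule would need extra conditions for other orderings and for brandid=any):

```xml
<rewrite>
  <rules>
    <rule name="Drop weight=any, rename brandid" stopProcessing="true">
      <match url="^categroy\.cfm$" />
      <conditions>
        <add input="{QUERY_STRING}" pattern="^categoryid=(\d+)&amp;weight=any&amp;brandid=(\d+)$" />
      </conditions>
      <action type="Redirect" url="categroy.cfm?categoryid={C:1}&amp;filter_brand={C:2}"
              appendQueryString="false" redirectType="Permanent" />
    </rule>
  </rules>
</rewrite>
```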
iptables to allow input and output traffic to and from web server only
Posted: 12 Sep 2021 03:04 PM PDT
I have an Elasticsearch server which seems to have been exploited (it's being used for a DDoS attack, having had NO firewall for about a month). As a temporary measure while I create a new one, I was hoping to block all traffic to and from the server that isn't coming from or going to our web server. Will these iptables rules achieve this: The first rule is tried and tested, but obviously it wasn't preventing traffic going from my server to other IP addresses, so I was hoping I could add the other two rules to fully secure it.
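Since the posted rules did not survive into this digest, here is a hedged sketch of a restrictive ruleset for that situation: default-deny in both directions, loopback allowed, and only the web server allowed to reach the Elasticsearch port (the web server address is a placeholder; 9200 is the Elasticsearch default):

```bash
WEB=203.0.113.10    # hypothetical web server IP

iptables -P INPUT DROP
iptables -P OUTPUT DROP
iptables -P FORWARD DROP

iptables -A INPUT  -i lo -j ACCEPT
iptables -A OUTPUT -o lo -j ACCEPT

# Web server may open connections to Elasticsearch; replies are allowed back out
iptables -A INPUT  -s "$WEB" -p tcp --dport 9200 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -d "$WEB" -p tcp --sport 9200 -m conntrack --ctstate ESTABLISHED -j ACCEPT
```

With OUTPUT defaulting to DROP the compromised host can no longer take part in outbound floods, but it also loses DNS, updates and your own ssh session unless those are explicitly allowed, so add an ACCEPT for your management address before applying this remotely.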
PAM LDAP configuration for non-local user authentication
Posted: 12 Sep 2021 09:02 PM PDT
I have a requirement to allow non-local user accounts to log in via LDAP authentication. Meaning, a user trying to log in is allowed access if the account exists in the LDAP server's database; there is no need to have a local user. I'm able to achieve this if I run nslcd (/usr/sbin/nslcd). I would like to know whether this can be done with configuration in /etc/pam.d/sshd or /etc/pam_ldap.conf alone, without running nslcd. Please let me know your suggestions. Thanks, Sravani
How to filter TCP packets based on flags using Packet Filter
Posted: 12 Sep 2021 05:03 PM PDT
Well, I didn't know exactly how to ask this question, but I know that you can use the keyword flags to specify which flags you want to filter. According to the documentation of Packet Filter:
So, I understood the example: why a packet with the flags S and E can pass (because the E flag is not considered, due to the SA mask) and why a packet with only the ACK flag can't pass the firewall. What I didn't understand is why a packet with the flags S and A can't pass the rule S/SA, if the S flag is "on" in the packet header. Maybe the documentation is ambiguous? Sorry if this is a stupid question or an English misunderstanding. I imagine that it can only pass if ONLY the S flag is set. In set arithmetic it would be something like this: the flag(s) must be 'on' in the header -> the flag(s) belong to the masked subset [pf doc]; only the flag(s) must be 'on' in the header -> the flag(s) are equal to the masked subset [what I understood from the example given]. Thanks in advance!
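The second reading is the right one: `flags S/SA` means "out of the set {SYN, ACK}, exactly SYN must be set" -- flags outside the mask are ignored, but ACK being set disqualifies the packet. A short illustrative pf.conf line (interface and port are placeholders):

```
# Matches only the first packet of a TCP handshake (SYN set, ACK clear);
# SYN+ACK replies and bare ACKs do not match, while SYN+ECE still does
# because ECE is outside the SA mask.
pass in on em0 proto tcp from any to any port 22 flags S/SA keep state
```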
Concatenating files to a virtual file on Linux
Posted: 12 Sep 2021 06:06 PM PDT
On a Linux system, is there any way to concatenate a series of files into one exposed file for reading and writing while not actually taking up another N bytes of disk space? I was hoping for something like mounting these files via loopback/devmapper to accomplish this. I have a problem where there are split binary files that can get quite large. I don't want to double my space requirements with massive disk IO just to temporarily read/write contents from these files. I found this project, but it seems to have a very specific use case and also depends on Perl.
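The loopback/device-mapper idea does work: each file becomes a loop device and a device-mapper "linear" table stitches them together into one block device, readable and writable in place with no extra copy. A hedged sketch (file names are placeholders; the files should be multiples of 512 bytes since the table works in sectors):

```bash
loop1=$(losetup --find --show part1.bin)
loop2=$(losetup --find --show part2.bin)

size1=$(blockdev --getsz "$loop1")   # sizes in 512-byte sectors
size2=$(blockdev --getsz "$loop2")

# Map sectors 0..size1 to the first file, the rest to the second
dmsetup create joined <<EOF
0 $size1 linear $loop1 0
$size1 $size2 linear $loop2 0
EOF

# The concatenation is now exposed at /dev/mapper/joined
# Tear down with: dmsetup remove joined; losetup -d "$loop1" "$loop2"
```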
Error 503 Service Unavailable Varnish
Posted: 12 Sep 2021 07:07 PM PDT
So I set up a new cloud-based instance with Ubuntu 12.04, with nginx, php5-fpm and Varnish. Before I installed and configured Varnish, the website worked fine and virtual hosts worked. After setting up Varnish I'm now getting Error 503 Service Unavailable. My nginx conf looks like this: /etc/default/varnish looks like this: /etc/varnish/default.vcl looks like the following: Checking varnishlog I see this:
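A 503 generated by Varnish itself means it cannot reach (or does not trust) its backend, which after inserting Varnish in front of nginx is most often a port mismatch. A hedged sketch of the usual split, with placeholder ports (VCL 3 syntax, as shipped with Ubuntu 12.04-era Varnish packages):

```
# /etc/varnish/default.vcl -- Varnish listens on :80 (set in /etc/default/varnish
# via "-a :80") and talks to nginx on 127.0.0.1:8080
backend default {
    .host = "127.0.0.1";
    .port = "8080";
}
```

The matching nginx server blocks then need `listen 127.0.0.1:8080;` instead of `listen 80;`; the FetchError entries in `varnishlog` show which side is refusing the connection.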
save Performance Monitor settings
Posted: 12 Sep 2021 06:07 PM PDT
I have added several counters to Windows 2008 Performance Monitor to monitor a web application. When I restart the server or close the Server Manager console, I lose all the added counters. I do not see a way to save and later load the counters, and adding the same counters every time is tedious and takes time. How do I save the counters?
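The ad-hoc counter view is not persisted, but a Data Collector Set is: right-clicking the Performance Monitor view and choosing "New > Data Collector Set" saves the current counters, and the same thing can be scripted with logman. A hedged example (counter paths, interval and output location are placeholders):

```
logman create counter WebAppCounters -c "\Processor(_Total)\% Processor Time" "\Memory\Available MBytes" -si 15 -o C:\PerfLogs\WebApp
logman start WebAppCounters
logman query WebAppCounters
```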
Troubleshooting 'Could Not Start' scheduled task error:
Posted: 12 Sep 2021 08:04 PM PDT
I'm trying to run Snapshot on my server to back up the drive onto a local NAS. I'm currently using this on Win2k, Win2k3, and Win2k8 servers. Both the Win2k and Win2k8 servers are correctly backing up the data, but the Win2k3 server is returning a:
error. I use a batch file to run Snapshot, and it's run using a Domain Admin account. Here's the specific batch code: (Note: blat is a simple program to send email from the command window.) I've tried following this KB article, found via this answer to a similar problem, with no success. I've also tried this solution, but alas, still no success. My last result was:
which means that:
(from this KB article) but it's not completing successfully, as it's not backing up the drives. Not sure where to go from here. Any suggestions?
