Recent Questions - Server Fault
- DFS Replication slow to a single server
- DHCP issue in Windows Server 2019
- How to implement dynamic DNS with Google Cloud?
- Kyocera ECOSYS M2640idw print 4 times instead of 1 using cups+raspberrypi
- How to trigger an event after saving a chosen csv file in Google Cloud Storage bucket
- Apache 2.4.x Reverse Proxy subdirectory and ports issue?
- Unable to deploy and use Google Vision On-Prem
- Split tunnel on OpenVPN Community for everything except the VPN's internal network
- Issues starting Kube-scheduler [ Kubernetes the hard way ]
- Pass variable value from one build step to other in jenkins job
- Best RAID and file system for all SSD nginx web server - large static files in one box
- exim interface setting does not work
- Reading only the metadata of a file in a Google Cloud Storage bucket into a Cloud Function in Python (without loading the file or its data!)
- How to make IPV6_AUTOCONF=no persistent
- How to catch access/request to an (non-existent) directory under a base path?
- If I setup a 'Passwordless SSH connection' as root user, will it be applied to all other users on the server?
- Istio - Prometheus - HPA Stack not communicating [ HPA could not calculate the number of replicas ]
- Nginx wildcard subdomain
- Enable Host-Guest Domain Resolution
- Totally isolate two interfaces -Linux
- CentOS 7 , OpenVPN Server Radius Plugin
- Access Denied when mounting Kerberised NFS v4 Share
- Ansible: why can't I use {{ ansible_facts['ansible_distribution_release'] }} in a playbook
- Is there a CloudWatch metric that corresponds to ALB data transfer usage/cost?
- How can I configure haproxy to put two frontends to access owa online?
- ADFS: Convert SAML Assertion to OAuth Token?
- How do I secure the access token, on Linux, to remote, automated secrets stores like Hashicorp Vault?
- JBoss EAP 6.2 on RHEL 6: ./bin/init.d/jboss-as-standalone.sh hangs while calling via SSH
- PHP Files being cached by unknown entity
- Setup squid3 proxy server on linux server with 2 ethernet ports
DFS Replication slow to a single server Posted: 20 Jan 2022 04:58 AM PST This is a proper head-scratcher. A customer has a domain with eight DCs: two (including the PDC) are Hyper-V VMs in our datacentre, four are vSphere VMs at national site offices (not RODCs), and the other two are also vSphere VMs in a third-party datacentre. SYSVOL replication is all but instantaneous between the PDC (DC1001) and the site DCs, yet between DC1001 and the other Hyper-V DC (DC1002), replication is taking a couple of hours. I've checked AD Sites and Services to ensure that all the links are in place, and I can see direct connections between the two Hyper-V boxes. We've put the two Hyper-V boxes on the same host to rule out a Hyper-V networking problem, but the slow replication persists. I've run a DFS Replication health check report and the results are confusing, to say the least: of all eight DCs, none of them has any backlogged receiving transactions, yet all of them except DC1002 have over 300 backlogged sending transactions. DC1002 has no backlog at all, and this is the "slow" node in the web. How can this be? DFSR diagnostic commands (the backlog excepted) report everything to be fine, yet there must be an explanation for why replication takes hours to show on DC1002. I'm no Hyper-V expert, so there may well be something I've missed there. There appears to be no file replication taking place, just SYSVOL. There are also associated entries in Event Viewer > Applications and Services Logs > DFS Replication. I've looked up this error in relation to DFS replication, but there's a myriad of solutions for all kinds of problems, so this seems like an unassailable minefield. The RPC service is running on all hosts (Automatic startup), there are no firewall rules blocking it, and general network connectivity is fine throughout. Any advice or experience with this problem would be greatly appreciated!
DHCP issue in Windows Server 2019 Posted: 20 Jan 2022 04:41 AM PST In a VirtualBox virtual machine running Windows Server 2019 (named 'Server2019') as PDC of a domain called 'infordidactica.local', while starting the DHCP configuration a pupil of mine ran into this situation: a DHCP server that doesn't exist appears, while a server that does exist (server2019.infordidactica.local) doesn't show up among the DHCP servers, even when expressly added. DHCP Problem Any help would be much appreciated; I'm stuck here. Thanks in advance, Paulo
How to implement dynamic DNS with Google Cloud? Posted: 20 Jan 2022 04:38 AM PST I have the following infrastructure: How can I implement this using Google Cloud? I managed to create client-id.frontend.com using Cloud Run domain mapping, but how can I allow the client to have Thank you.
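If the zone is hosted in Cloud DNS, a dynamic-DNS-style update can be done by having an agent on the client call `gcloud` (or the Cloud DNS REST API) whenever its address changes. A minimal sketch — the zone name `frontend-zone` and the example address are assumptions, not from the question:

```shell
# Assumed managed zone and record; run by an agent on the client (or a
# small service acting on its behalf) each time the client's IP changes:
gcloud dns record-sets update client-id.frontend.com. \
    --zone=frontend-zone --type=A --ttl=60 --rrdatas=203.0.113.7
```

A short TTL (60s here) keeps stale addresses from lingering; the trade-off is more resolver traffic.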
Kyocera ECOSYS M2640idw prints 4 times instead of 1 using CUPS + Raspberry Pi Posted: 20 Jan 2022 04:01 AM PST I installed an ECOSYS M2640idw printer on a Raspberry Pi (Raspbian 10) using the PPD provided by the manufacturer, but when I send one copy it prints four copies. I also tried a generic PPD (Generic PCL 6/PCL XL) and it does the same thing. I attach the PPD and the CUPS error_log. PPD: https://drive.google.com/file/d/1uK7ynzNxPicHXTc8D-OEZC_BEA4TxH_Z/view?usp=sharing cups error_log: https://drive.google.com/file/d/1j47ioan9N1oWrw73kr0YaGuLohJVdISt/view?usp=sharing Thanks for your help.
How to trigger an event after saving a chosen CSV file in a Google Cloud Storage bucket Posted: 20 Jan 2022 03:57 AM PST Trying to build a synchronous pipeline, I need to copy a CSV file from Google Cloud Storage after it has been saved there. The copy job doesn't have to be triggered immediately after the save; happening within some time frame is fine. It just must not happen before the file has been saved. Therefore either a trigger event or a cron job would work, or you may suggest something else. How can I trigger copying a chosen CSV file after it has been saved in Google Cloud Storage? Can I use a Cloud Function to do the copy job, or are there other ways?
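A Cloud Function can indeed do this: the `google.storage.object.finalize` event fires only after an object is fully written, which satisfies the "never before the save" requirement. Below is a sketch of such a handler — the bucket names are assumptions, and it is written against a duck-typed client (in production you would pass `google.cloud.storage.Client()`) so the copy logic can be exercised without GCS:

```python
# Sketch of a finalize-triggered copy job. The real entry point would be
# def entry(event, context): copy_on_finalize(event, storage.Client(), DEST)
# where DEST is your (assumed) destination bucket name.

def copy_on_finalize(event, storage_client, dest_bucket_name):
    """Copy the object described by a 'google.storage.object.finalize'
    event into dest_bucket_name. Only acts on .csv objects."""
    name = event["name"]                      # object path within the bucket
    if not name.endswith(".csv"):
        return None                           # ignore non-CSV uploads
    src_bucket = storage_client.bucket(event["bucket"])
    dest_bucket = storage_client.bucket(dest_bucket_name)
    blob = src_bucket.blob(name)
    # copy_blob performs a server-side copy; object bytes never pass
    # through the function itself.
    return src_bucket.copy_blob(blob, dest_bucket, name)
```

Because the copy is server-side, the function stays fast and cheap even for large files; a Cloud Scheduler cron job calling the same logic would also satisfy the "within some time frame" variant.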
Apache 2.4.x Reverse Proxy subdirectory and ports issue? Posted: 20 Jan 2022 03:57 AM PST I have 3 different applications deployed on the same server, each on a different port.
This is an Apache config file, test.conf. The API works fine at the http://test.com/api URL, and the website also works fine at http://test.com. The problem occurs when I access the http://test.com/admin URL: the browser shows status code 400 Bad Request. If I remove the admin panel from the reverse proxy and create another vhost file with a simple configuration like the one below (another Apache vhost config file, test-admin.conf), then the other URLs stop working, showing a 404 Not Found error. Note: www.test.com is just a stand-in for the actual domain name or IP.
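One common cause of exactly this symptom is directive ordering: Apache evaluates `ProxyPass` rules in configuration order and uses the first match, so a `ProxyPass "/"` listed before `ProxyPass "/admin"` shadows the admin rule. A hedged fragment — the backend ports are assumptions, since the original config was not included:

```apache
# Hypothetical backend ports; adjust to your three applications.
# ProxyPass rules match in order, so list the most specific paths first:
ProxyPreserveHost On
ProxyPass        "/admin" "http://127.0.0.1:8081/admin"
ProxyPassReverse "/admin" "http://127.0.0.1:8081/admin"
ProxyPass        "/api"   "http://127.0.0.1:8082/api"
ProxyPassReverse "/api"   "http://127.0.0.1:8082/api"
ProxyPass        "/"      "http://127.0.0.1:8080/"
ProxyPassReverse "/"      "http://127.0.0.1:8080/"
</VirtualHost> is assumed to follow as in any vhost; only the proxy rules are shown.
```

If the admin app generates absolute self-links, `ProxyPassReverse` (and possibly `ProxyHTMLURLMap`) is what keeps redirects from leaking the backend port.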
Unable to deploy and use Google Vision On-Prem Posted: 20 Jan 2022 03:53 AM PST After purchasing/subscribing to Vision On-Prem OCR from the Marketplace, I configured and deployed the application on a GKE cluster. The automatic deployment seems to have encountered an error; please find the attached screenshots. Also, after a successful deployment, we need support in order to access the application. The documentation page is no longer accessible: https://cloud.google.com/vision/on-prem/priv/docs OCR-Service-Deployment-Failed
Split tunnel on OpenVPN Community for everything except the VPN's internal network Posted: 20 Jan 2022 03:46 AM PST As I had very little time to set up remote access, I set up my OpenVPN server using the script available via GitHub. The downside of this script is that all traffic is pushed through the VPN server, so if I browse anything on the internet or download "heavy" data, it all goes through the VPN server instead of taking the direct path: website -> my router/ISP -> my PC. I have found numerous articles, but they are all confusing to follow, as many of them include custom things specific to their setups. What I currently have: My PC at home: 123.123.123.123/30. My VPN server: 223.223.223.223/23. My internal tunnel interface: 10.5.0.1/24. Internal server network: 223.223.223.0/23 (yes, it is unfortunately a public IP range, the same as the server). The client ovpn config looks like this: My question is:
What should my config look like to achieve that? Thank you in advance.
Issues starting kube-scheduler [Kubernetes the hard way] Posted: 20 Jan 2022 03:13 AM PST I am trying to set up a Kubernetes cluster the hard way by following Kelsey Hightower's guide kubernetes-the-hard-way. After setting up the kube-scheduler, when I start the scheduler I see the following errors: Jan 20 10:20:01 xyz.com kube-scheduler[12566]: F0120 10:20:01.025675 12566 helpers.go:119] error: no kind "KubeSchedulerConfiguration" is registered for version "kubescheduler.config.k8s.io/v1beta1" Jan 20 10:20:01 xyz.com systemd[1]: kube-scheduler.service: Main process exited, code=exited, status=255/n/a Jan 20 10:20:01 xyz.com systemd[1]: kube-scheduler.service: Unit entered failed state. Jan 20 10:20:01 xyz.com systemd[1]: kube-scheduler.service: Failed with result 'exit-code'. Jan 20 10:20:06 xyz.com systemd[1]: kube-scheduler.service: Service hold-off time over, scheduling restart. My kube-scheduler.yaml inside /etc/kubernetes/config looks like this. Can somebody please provide some pointers to what is going on, or what am I missing? My kube-apiserver and kube-controller-manager are active.
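The "no kind ... is registered for version" fatal usually means the `apiVersion` in the config file doesn't match what the installed `kube-scheduler` binary registers: `kubescheduler.config.k8s.io/v1beta1` only exists in a window of releases (roughly v1.19 through v1.24; older binaries want `v1alpha1`/`v1alpha2`, newer ones `v1beta2`/`v1`). A hedged example of the file — the kubeconfig path is an assumption borrowed from the tutorial's layout:

```yaml
# Match apiVersion to the scheduler binary's release; v1beta1 here assumes
# roughly v1.19+. Run `kube-scheduler --version` to check what you installed.
apiVersion: kubescheduler.config.k8s.io/v1beta1
kind: KubeSchedulerConfiguration
clientConnection:
  kubeconfig: "/var/lib/kubernetes/kube-scheduler.kubeconfig"  # assumed path
leaderElection:
  leaderElect: true
```

If the binary was downloaded at a different version than the guide was written for, either change the `apiVersion` or pin the binary to the guide's version.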
Pass a variable value from one build step to another in a Jenkins job Posted: 20 Jan 2022 03:05 AM PST I want to pass a variable value from one build step to another, that is, from 'Execute shell' to 'Send files or execute commands over SSH'. My script in 'Execute shell' is: Send files or execute commands over SSH
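Each build step in a freestyle job runs in its own shell, so an exported variable doesn't survive into the next step. A common workaround is to write the value to a properties file in the workspace and read it back in the later step (the EnvInject plugin can also inject such a file as environment variables). A sketch — the variable name and value are assumptions:

```shell
# Step 1 ("Execute shell"): persist the value to the workspace.
ARTIFACT_VERSION="1.2.3"                         # assumed example value
echo "ARTIFACT_VERSION=${ARTIFACT_VERSION}" > build_vars.properties

# Step 2 (a later build step, or the SSH step's "Exec command" after
# transferring the file to the remote host): read it back.
. ./build_vars.properties
echo "deploying version ${ARTIFACT_VERSION}"
```

For the SSH publisher step specifically, adding `build_vars.properties` to the transfer set and sourcing it in the remote "Exec command" gets the value across the machine boundary too.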
Best RAID and file system for an all-SSD nginx web server - large static files in one box Posted: 20 Jan 2022 03:03 AM PST I need to build a server for serving large static files using nginx, requiring a total of 40TB of disk space. The total download bandwidth is 30 Gbps at peak, for 15k connections. The actual workload is unknown, so I decided to use an all-SSD setup in an HP DL380 G8 or G9 server with two dual-10G NIC cards (4x 10G links bonded). Is there any best practice for using SSDs in this setup? For example, hardware RAID or software RAID? ZFS, XFS, or...? RAID5, RAID6? Can I use 12 SSDs in one RAID set?
exim interface setting does not work Posted: 20 Jan 2022 02:57 AM PST We have an Exim server with two IPs. We created a script to change the interface used for sending mail, adding the second interface's settings in the remote_smtp section. But when we check the mail that was sent, the interface is always the same: the primary one. Is there another Exim setting that could be changing the interface? Thanks, best regards.
Reading only the metadata of a file in a Google Cloud Storage bucket into a Cloud Function in Python (without loading the file or its data!) Posted: 20 Jan 2022 03:03 AM PST I need something like Cloud Storage for Firebase: download metadata of all files, just not in Angular but in Python, and just for a chosen file instead. The aim is to return this information when the cloud function finishes with the I have found Q&As on loading a file or its data into the cloud function
to extract data stats into the running Cloud Function from the external file. Since I do not want to hold the large file or its data in memory at any time only to get some metadata, I want to download only the metadata of the file stored in a bucket in Google Storage, namely its timestamp and size. How can I fetch only the metadata of a CSV file in a Google Cloud Storage bucket into the Google Cloud Function?
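In the `google-cloud-storage` Python client, `bucket.get_blob(name)` issues a metadata-only request — the object's bytes are never downloaded — and the returned blob exposes `size`, `updated`, `content_type`, and so on. A sketch written against a duck-typed client so it can be exercised without GCS; with the real library you would call it as `csv_metadata(storage.Client(), "my-bucket", "data.csv")` (bucket and object names here are assumptions):

```python
def csv_metadata(storage_client, bucket_name, blob_name):
    """Return size and timestamp for one object without downloading it.

    get_blob() performs a metadata-only fetch: blob.size and blob.updated
    are populated from the object resource, not from the object's content.
    """
    blob = storage_client.bucket(bucket_name).get_blob(blob_name)
    if blob is None:                       # get_blob returns None if absent
        raise FileNotFoundError(f"{blob_name} not in {bucket_name}")
    return {"name": blob.name, "size": blob.size, "updated": blob.updated}
```

This keeps the Cloud Function's memory footprint independent of the file size, which is exactly the property the question asks for.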
How to make IPV6_AUTOCONF=no persistent Posted: 20 Jan 2022 01:58 AM PST I modified the line "IPV6_AUTOCONF=no". When we make this change manually, it is lost after a network restart/reboot. How do we configure the system so this change is persistent?
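On RHEL/CentOS-style systems, `IPV6_AUTOCONF=no` belongs in the per-interface ifcfg file, and a matching sysctl makes the setting survive regardless of which service brings the link up. A hedged sketch — the interface name `eth0` is an assumption; if NetworkManager manages the interface, the equivalent change should go through `nmcli` so it isn't rewritten:

```ini
# /etc/sysconfig/network-scripts/ifcfg-eth0  (interface name assumed)
IPV6_AUTOCONF=no

# /etc/sysctl.d/90-ipv6-noautoconf.conf -- belt and braces; applied at
# boot and surviving network restarts:
net.ipv6.conf.eth0.autoconf = 0
net.ipv6.conf.eth0.accept_ra = 0
```

After writing the sysctl file, `sysctl --system` applies it without a reboot.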
How to catch access/requests to a (non-existent) directory under a base path? Posted: 20 Jan 2022 01:57 AM PST I would like to reproduce autofs's capability to detect access to a sub-path of a base path and call the corresponding handler. The motive is that autofs cannot set (nor keep) the shared or rshared property of the mountpoint. While systemd has path, mount, and automount unit types, and I could use generators, I failed to find a way to catch a request to a non-{existent,yet-mounted} directory under a base path. E.g.: if the request is to anything under /cvmfs/unpacked.cern.ch/some_other_dir_in_hierarchy, to catch that the request is relative to /cvmfs and call the mount handler for /cvmfs/unpacked.cern.ch. Is there a solution to this? (It should be available under CentOS 7.)
If I set up a 'passwordless SSH connection' as the root user, will it apply to all other users on the server? Posted: 20 Jan 2022 02:34 AM PST Hi, I'm new to the concept of SSH and passwordless authentication. I'm trying to set up a passwordless SSH connection between two servers, A and B, using ssh-keygen. If I generate the keys on Server A as the root user, can all the other users on Server A use the passwordless SSH connection, or do I need to create separate keys for each and every user? I'm trying to set up passwordless SSH for a set of specific users, including root.
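The short answer is that SSH key pairs are per-user: a key generated as root lives in `/root/.ssh` and does nothing for other accounts, so each local user needs their own pair, with its public half appended to the matching remote account's `authorized_keys`. A sketch — the scratch directory is for demonstration only; the commented commands show the real per-user flow (user and host names assumed):

```shell
# Key material is per-account: generating as root only serves root.
# Demo: create a key pair in a scratch directory (no passphrase).
rm -rf demo_keys && mkdir -p demo_keys
ssh-keygen -t ed25519 -N "" -f demo_keys/id_ed25519 -q

# For real use, each user runs, as themselves:
#   ssh-keygen -t ed25519
#   ssh-copy-id user@serverB   # appends their pubkey to that account's
#                              # ~/.ssh/authorized_keys on server B
cat demo_keys/id_ed25519.pub
```

Repeating `ssh-keygen`/`ssh-copy-id` once per user (including root) gives each account its own passwordless path, which is also what you want for revocation: removing one user's key doesn't affect the others.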
Istio - Prometheus - HPA stack not communicating [HPA could not calculate the number of replicas] Posted: 20 Jan 2022 02:06 AM PST I have a cluster with 1 control plane and 2 nodes, with Istio installed as the service mesh. I do request management via the Istio ingress. I want the workload to scale automatically by sharing metrics between the Kubernetes HPA and Istio's Prometheus, but I couldn't get it working. My pods in kube-system: My pods in istio-system: Prometheus UI result: Metrics server response: Here is my HPA definition: kubectl top pods result: HPA YAML: I have concerns about where I went wrong, or whether I'm on the right path at all. This is my first post and I'm excited for answers. I hope I explained myself correctly. Thanks
Nginx wildcard subdomain Posted: 20 Jan 2022 03:09 AM PST I have set up my nginx config file as: Browsing through example.com and www.example.com is fine, but when I use a subdomain like a.example.com or b.example.com, I get "301 Moved Permanently" and I am redirected back to example.com. Here is the actual file:
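A 301 back to the apex usually means the subdomain request isn't matching any `server_name` and is falling into a server block that redirects (often the default or the `www` one). An explicit wildcard server block catches them; a hedged fragment, since the original file wasn't included — the domain and per-subdomain docroot layout are assumptions:

```nginx
# Catch every subdomain explicitly; without this, a.example.com falls
# through to the default server block (often the one that 301s to the apex).
server {
    listen 80;
    server_name *.example.com;
    # Or, to capture the subdomain label into a variable:
    # server_name ~^(?<sub>[^.]+)\.example\.com$;

    root /var/www/$host;   # assumed layout: one docroot per subdomain
}
```

`nginx -T | grep server_name` is a quick way to see which block a given hostname will actually land in.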
Enable Host-Guest Domain Resolution Posted: 20 Jan 2022 02:08 AM PST Situation: Problem:
Solutions that I have tried: 1.) First, I tried the obvious. I edited my 2.) Next, I tried installing avahi-daemon on the guest server as follows: Does anyone know how I can get my VirtualBox domain names visible to my host? Thanks. Update: @Gaétan RYCKEBOER's advice below revealed something useful. When I ran It seems that This is what I need to correct.
Totally isolate two interfaces - Linux Posted: 20 Jan 2022 04:53 AM PST I'm a bit embarrassed, but I need your help. I have three interfaces on a virtual machine, and I want to completely isolate the interfaces from each other. I created one routing table for each interface: But when I try to telnet or ping or whatever from one interface to another, all the traffic goes through the loopback. Is there a way to correct that?
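Policy routing alone can't stop this: both addresses sit in the kernel's `local` routing table, so traffic between two local addresses short-circuits over `lo` before any custom table is consulted. Full isolation means separating the network stacks, which is what network namespaces provide. A hedged command sketch (requires root; the interface name and addressing are assumptions):

```shell
# Move one interface into its own network namespace -- it then has its
# own routing tables, and the default namespace has no path to it at all.
ip netns add isolated
ip link set eth1 netns isolated            # interface name assumed
ip netns exec isolated ip addr add 192.0.2.10/24 dev eth1
ip netns exec isolated ip link set eth1 up
# Processes that should use eth1 are started with:
#   ip netns exec isolated <command>
```

If the interfaces must stay visible to the same processes, the weaker alternative is tightening `rp_filter`/`arp_filter` sysctls, but that only shapes ARP and reverse-path behaviour; it does not prevent local cross-interface delivery.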
CentOS 7, OpenVPN Server RADIUS plugin Posted: 20 Jan 2022 04:51 AM PST On my new OpenVPN server installation, the RADIUS plugin cannot read client status. It worked on the previous installation; everything is now set up the same, but it's not working. OpenVPN server conf: Client: RADIUS plugin: The server log shows this: RADIUS-PLUGIN: BACKGROUND ACCT: No accounting data was found for user01
Access Denied when mounting Kerberised NFS v4 Share Posted: 20 Jan 2022 03:09 AM PST I want to mount an NFS4 share, but with Kerberos security enabled. This is my setup:
So as I'm still struggling with Kerberos, this is how I tried to achieve my goal: Chapter I: Setup 1- Put both machines in the same realm/domain (this has already been set up by others and works) 2- Created two users (users, not computers!) per machine: nfs-nfsv4client, host-nfsv4client, nfs-nfsv4test and host-nfsv4test. After the creation I enabled AES 256-bit encryption for all of the accounts. 3- Set a service principal for the users: I did this for all 4 users/principals. 3- Created the keytabs on the Windows KDC: So after that I had 4 keytabs. 4- Merged the keytabs on the server (and client): The file has 640 permissions. 5- Exported the directories on the server; this had already worked without Kerberos. With Kerberos enabled, the export file looks like this: Running exportfs -rav works: ...and on the client I can view the mounts on the server: 6a- krb5.conf has the default config for the environment it was set up for, and I haven't changed anything: 6- Then I set up my sssd.conf like this, but I haven't really understood what's going on here: 7- idmapd.conf on both machines: 8- And /etc/default/nfs-common on both machines: 9- Last but not least, nfs-kernel-server on the server: 10- Then, after rebooting both server and client, I tried to mount the share (as the root user): But sadly, the mount doesn't work; I don't get access. On the first try it takes quite long, and this is the output: Chapter II: Debugging For a more detailed log, I ran on the server, but I don't get that many logs. However, when trying to mount, syslog tells me: As this didn't really help me at all, I recorded the traffic with tcpdump, which gives me this (I redacted the real IP addresses): So the interesting part here is the Auth Bogus (Seal broken)? Is there really something broken there, or is it just the generic error that appears when something is wrong? I couldn't find anything helpful about this error on the web.
So, coming back to Kerberos itself, the keytab seems to be OK: When trying to test the keytab file, it seems to work: But on this page it's stated that the keytab should be tested with which resolves to which doesn't work, as no key was found for Another log I found on the mounting client machine (in messages): It's a lot of stuff, but I can't find the meaning of error -13, except that it's Permission Denied. Chapter III: The question The principals are there in the keytab. So when the client asks the server about the NFS share and tries to access it, both should have the keys to interact with each other. But for some reason it doesn't work. Could it be because of the assignment of the principals to the user accounts? How can I get this to work? How do I get better information when debugging? Sorry for the wall of text. PS: I mainly followed this tutorial. It seemed like a perfect match for my environment.
Ansible: why can't I use {{ ansible_facts['ansible_distribution_release'] }} in a playbook Posted: 20 Jan 2022 04:56 AM PST I have an Ansible task that runs on localhost like this I wish to use the variable
I tried to use Then I thought I should just access the facts directly, not as a key of the ansible_facts variable, but then I read the official documentation and saw use cases like
This made me suspect there is something wrong with my understanding of Ansible variables. I've tried not quoting I ran the command below, which proved there is indeed an attribute named Any help will be appreciated. Update: I followed the instructions shown below and found that the documentation says
It seems I can access facts inside ansible_facts without the ansible_ prefix.
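That matches how Ansible documents it: keys inside the `ansible_facts` dictionary carry no `ansible_` prefix, while the prefixed names are the facts injected as top-level variables. So both spellings below work, but `ansible_facts['ansible_distribution_release']` does not. A minimal playbook sketch:

```yaml
- hosts: localhost
  gather_facts: true
  tasks:
    - name: keys inside ansible_facts have no ansible_ prefix
      ansible.builtin.debug:
        msg: "{{ ansible_facts['distribution_release'] }}"

    - name: the prefixed spelling is the injected top-level variable
      ansible.builtin.debug:
        msg: "{{ ansible_distribution_release }}"
```

With `INJECT_FACTS_AS_VARS` disabled, only the `ansible_facts[...]` form remains available, which is one reason the docs favour it.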
Is there a CloudWatch metric that corresponds to ALB data transfer usage/cost? Posted: 20 Jan 2022 02:47 AM PST I have an Application Load Balancer whose data transfer cost I want to monitor. In Cost Explorer, I can filter on usage type "DataTransfer-Out-Bytes", and see how many GB of data it is sending, and how much that costs. However, it only shows the total for each day, and the data is delayed by several hours. In order to see how the amount of traffic is affected by changes I make, I'd like to see that same number in CloudWatch, but I can't find any corresponding metric. The Per-AppELB "ProcessedBytes" metric sounded promising, but that number is slightly more than half the number I see in Cost Explorer. (My best guess is that TLS handshake overhead isn't included.) Is there any metric or combination of metrics that matches what I end up getting billed for?
How can I configure HAProxy with two frontends to access OWA online? Posted: 20 Jan 2022 02:07 AM PST I am facing a problem with HAProxy on an Ubuntu 16.04 server when redirecting to expose OWA on the internet. I have a domain, and I installed Exchange Server 2013 on Windows Server 2012 R2. I need to use a second frontend in TCP mode for OWA on both ports 443 and 80. The problem is that OWA appears sometimes, but after refreshing the page it gives an error or shows another of my sites with a different CA, because of the old haproxy-in frontend (mode http). I have Let's Encrypt certificates for all my sites assigned to port 443. I need a solution to serve OWA alongside the other sites. This is my HAProxy configuration file, starting from the first frontend:
ADFS: Convert SAML Assertion to OAuth Token? Posted: 20 Jan 2022 03:01 AM PST We have Microsoft Active Directory Federation Services (ADFS) as our authentication/federation provider. We use it for performing identity federation via SAML to several external vendors, SaaS providers, etc. In addition, we have several vendors that only support OAuth, so we have configured integrations with those vendors using ADFS 2016's OAuth support. As such, we are able to generate both SAML assertions and OAuth access tokens, as needed. Now we have run into a situation where Vendor A (configured for SAML auth) needs to make a RESTful service call to Vendor B (configured to require OAuth tokens). Is there a way to convert an ADFS-generated SAML assertion into an ADFS-generated OAuth token? Given that both credentials are generated by ADFS, I would think that ADFS would have a way of performing the conversion. Is there an endpoint where I can POST a SAML assertion and get back the OAuth token in return? Any help would be GREATLY appreciated!
How do I secure the access token, on Linux, to remote, automated secrets stores like Hashicorp Vault? Posted: 20 Jan 2022 04:00 AM PST There seems to be a bit of a "chicken and egg" problem with the passwords to password managers like Hashicorp Vault for Linux. While researching this for some Linux servers, someone clever asked, "If we're storing all of our secrets in a secrets storage service, where do we store the access secret to that secrets storage service? In our secrets storage service?"‡ I was taken aback, since there's no point to using a separate secrets storage service if all the Linux servers I'd store the secrets on anyway have its access token. For example, if I move my secrets to Vault, don't I still need to store the secrets to access Hashicorp Vault somewhere on the Linux server? There is talk about solving this in some creative ways, and at least making things better than they are now. We can do clever things like auth based on CIDR or password mashups. But there is still that security trade-off. For example, if a hacker gains access to my machine, they can get to Vault if the access is based on CIDR. This question may not have an answer, in which case, the answer is "No, this has no commonly accepted silver bullet solution, go get creative, find your tradeoffs, bla bla bla". I want an answer to the following specific question: Is there a commonly accepted way that one secures the password to a remote, automated secrets store like Hashicorp Vault on modern Linux servers? Obviously, plaintext is out of the question. Is there a canonical answer to this? Am I even asking this in the right place? I considered security.stackexchange.com, too, but this seemed specific to a way of storing secrets for Linux servers. I'm aware that this may seem too general, or opinion based, so I welcome any edit suggestions you might have to avoid that. ‡We laugh, but the answer I get on here may very well be "in vault".
For instance, a Jenkins server or something else has a 6-month revocable password that it uses to generate one-time-use tokens, which it then uses to get its own little ephemeral (session-limited) password generated from Vault, which gets it a segment of info. Something like this seems to be along the same vein, although it'd only be part of the solution: Managing service passwords with Puppet
JBoss EAP 6.2 on RHEL 6: ./bin/init.d/jboss-as-standalone.sh hangs when called via SSH Posted: 20 Jan 2022 02:07 AM PST I'm using jboss-as-standalone.sh to manage JBoss EAP standalone as a service. I can start/stop the service with "service jboss-as-standalone.sh start/stop" while I'm in a terminal, but I would like to start JBoss from outside the server via SSH, using our continuous deployment infrastructure. Therefore I'm issuing a command like this: The server starts up normally, but SSH hangs. It seems unable to close the connection because of the background job forked by this command in the script: Is there any other way to start JBoss as a service that works with notty SSH connections as well? Best regards, Jan
PHP files being cached by unknown entity Posted: 20 Jan 2022 03:01 AM PST I'm hitting a weird cache issue on my server. The project I am working on doesn't have any caching enabled at this time, but the server itself has APC installed (which was set to cache everything by default; this has been disabled now). The problem is that my old code is still running, and I don't know how to get the amended code to trigger. I have tried deleting the file entirely; this makes my project error with "missing file", as it should, but once I upload the new version of the file, it starts serving the old version again. I've uploaded a uniquely labeled file with I have also commented out APC from loading with PHP, but it still served old files, so I am wondering if there's something underlying causing this aggressive caching. Apache2, PHP, APC etc. are all installed via aptitude on Debian Wheezy: PHP 5.4.4-14+deb7u3 (running under mod_php), Apache 2.2.22. Between each config change and disabling APC I did a complete Apache restart. I've checked the Apache module list: no cache modules are loaded, and there are also no services such as Varnish running. Update: Did some additional testing; added some HTML output before the The file that isn't updating is being included with The problem is with trying to use HTML2PDF to generate a .pdf file after form submission
The new version of the file uses
Setup squid3 proxy server on a Linux server with 2 Ethernet ports Posted: 20 Jan 2022 04:00 AM PST I need to set up a squid3 proxy server on my Linux machine, which has 2 Ethernet ports (eth0 and eth1). eth0 has the IP address 192.168.1.2, assigned by a router which provides internet to the system. eth1 is connected to a switch. I need squid3 to serve the switch through eth1. How should I configure eth1? I don't need the configuration for squid3 itself. What should I do?
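The LAN-side interface just needs a static address on its own subnet, with Squid listening on that address and the clients on the switch pointing at it. A hedged fragment — the 10.0.0.0/24 addressing is an assumption, and the paths match Debian's squid3 packaging:

```ini
# /etc/network/interfaces (Debian-style; addressing assumed)
auto eth1
iface eth1 inet static
    address 10.0.0.1
    netmask 255.255.255.0

# /etc/squid3/squid.conf -- listen on the LAN side, allow that subnet
http_port 10.0.0.1:3128
acl lan src 10.0.0.0/24
http_access allow lan
```

Clients on the switch then either get addresses from a DHCP server you run on eth1 (e.g. dnsmasq) or are configured statically in 10.0.0.0/24 with proxy 10.0.0.1:3128.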