Clear NVRAM on ML350 G6 Posted: 16 May 2021 10:09 PM PDT I just want to reset the NVRAM of my server, but I do not know how to operate the system maintenance switch. Please help. The switches are tiny and I don't know what tool to use. |
SQL Server Agent is not visible for CloudSQL instance of SQL Server (GCP) Posted: 16 May 2021 10:08 PM PDT When I connect to CloudSQL using SSMS, I do not see the SQL Server Agent option there. Does this have anything to do with the role assignment for my login? |
How to check ip-port mapping after adding a port-forwarding rule by firewall-cmd Posted: 16 May 2021 08:23 PM PDT In CentOS 7, I have set up a TCP relay with the command "firewall-cmd --add-forward-ports ...". Now I need to know the mapping between each incoming IP and the port that NAT assigns to it as the new source port. Is there a translation table somewhere on my system, or how can I query the current mapping? So far, the only approach I have come up with is to add a LOG rule in iptables where the forwarding is accepted. |
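For the firewalld question above, the kernel's connection-tracking table is where NAT rewrites for forwarded flows are recorded (original tuple vs. reply tuple), so one hedged way to inspect the live mappings on CentOS 7 is:
conntrack -L -p tcp         # from the conntrack-tools package (yum install conntrack-tools)
cat /proc/net/nf_conntrack  # the same data via procfs, without installing anything extra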
Cryptographic Services keeps restarting even when the service has been disabled. (Windows 7 Professional x64) Posted: 16 May 2021 08:16 PM PDT As the title reads: no recovery options are responsible for relaunching the service, and the startup type is switched back from 'Disabled' to 'Manual' each time. How do I PERMANENTLY disable this service? |
Apache .htaccess set a header if Request URI does not exactly match a desired value Posted: 16 May 2021 08:11 PM PDT Goal: using a .htaccess file in /directory/, if request URI does NOT exactly match "/directory/" then set a header. For example, the header SHOULD be set if the request URI is "/directory/index.php", "/directory/?something", or "/directory/index.php?something", but should NOT be set if the request URL is exactly "/directory/" Apache/2.4.41 so the IF directive is available. Main two methodologies I've attempted: <If "%{REQUEST_URI} != '/directory/'"> Header always set X-Robots-Tag "noindex" </If> SetEnvIf Request_URI "/directory/" isgood Header always set X-Robots-Tag "noindex" env=!isgood I've tried a number of variations, such as using "early" on the header, as well as trying to match just "/" based on past experience that when working in an .htaccess context it might not "see" the full URI. All attempts result in either the header ALWAYS being set or NEVER being set. (Yes, mod_header and mod_setenvif are enabled) I rigged this up for testing, trying a few different methods at once with different headers: <If "%{REQUEST_URI} != '/'"> Header always set X-Test1 "test1" </If> <If "%{REQUEST_URI} != '/files/crackypics/2005-January-alt/'">i Header always set X-Test2 "test2" </If> SetEnvIf Request_URI "/" test3 Header always set X-Test3a "test3a" env=!test3 Header always set X-Test3b "test3b" env=test3 SetEnvIf Request_URI "/files/crackypics/2005-January-alt/" test4 Header always set X-Test4a "test4a" env=!test4 Header always set X-Test4b "test4b" env=test4 Regardless of the actual request URI, I always get test1, test2, test3b, and test4b. |
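For the .htaccess question above, one way to see which rule variant actually fires is to request each URI form and inspect the returned headers; a small sketch (the hostname is a placeholder):
curl -sI 'https://example.com/directory/' | grep -i x-robots-tag
curl -sI 'https://example.com/directory/index.php?something' | grep -i x-robots-tag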
Transparent proxy with iptables Posted: 16 May 2021 07:18 PM PDT I have been trying to achieve a scenario but have not been able to get it working, and I hope someone can come up with the solution. Basically, I have two publicly accessible servers. If any client requests a service on this machine, let's say a UDP service, I want the traffic to be redirected to server B. The configuration I have is iptables -t nat -I PREROUTING -d <This pc> -p udp --dport 9987 -j DNAT --to-destination <Redirecting server> iptables -A FORWARD -d <redirecting server> -j ACCEPT sysctl -p net.ipv4.ip_forward = 1 I can access the other server from outside and from my box, there is no limiting policy on the redirecting (destination) server, and the packets are being matched by the iptables rule. Thank you |
Puppet 5.5.22, dnfmodule reset Posted: 16 May 2021 06:50 PM PDT I've been trying to figure out how to convert dnf module reset php dnf module install php:remi-7.4 to a stanza in a puppet module for several hours without any success. Has anyone figured out how to do that? The doco on the puppet website is somewhat lacking, shall we say. |
ESXi VM unable to communicate with physical LAN Posted: 16 May 2021 06:37 PM PDT I have set up a Windows 2019 VM on an ESXi 7.0 host with the following basic network topology: the VM is connected to vSwitch0 (via a vNIC added to the VM) with an uplink to the only physical NIC on the host. The host and another physical PC running Windows 10 are connected to a router: Router (192.168.1.1), Host (192.168.1.2), PC (192.168.1.3), VM (192.168.1.4 - statically set on the vNIC in the VM). The physical NIC is a USB Ethernet adapter named vusb0 by ESXi. From the VM, for some unknown reason, I am unable to ping the physical PC (and from the physical PC I cannot ping the VM). However, I can ping the host from both the VM and the physical PC. Am I missing anything obvious to get the VM to communicate with devices outside its vSwitch0? Do I have to set an IP range on the vNIC itself - if so, how would I go about doing this? Any suggestions are appreciated |
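For the ESXi question above, it can help to confirm from the ESXi shell which uplink vSwitch0 really has and what its security policy is (assuming SSH/shell access is enabled; output fields vary by build):
esxcli network vswitch standard list                               # uplinks and policies of vSwitch0
esxcli network nic list                                            # link state of the vusb0 adapter
esxcli network vswitch standard policy security get -v vSwitch0    # promiscuous / MAC-change / forged-transmit settings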
Connection refused on SMTP port 465 Posted: 16 May 2021 06:01 PM PDT I'm trying to open port 465 to secure my Postfix SMTP. But when I test it with telnet mail.example.com 465 I get Trying my.server.public.ip ... telnet: Unable to connect to remote host: Connection refused I tried to open the port with the command iptables -I INPUT -p tcp -m tcp --dport 465 -j ACCEPT How can I solve this problem? Thanks in advance :) |
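For the Postfix question above, "Connection refused" usually means nothing is listening on the port rather than a firewall drop, so a sketch of server-side checks (standard Debian/RHEL tooling assumed):
ss -tlnp | grep ':465'                      # is anything listening on 465 at all?
postconf -M | grep -E 'smtps|465'           # is the smtps/465 service enabled in master.cf?
iptables -L INPUT -n --line-numbers | head  # confirm where the ACCEPT rule landed relative to other rules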
ntpdate succeeds but ntpq fails against Windows DC Posted: 16 May 2021 05:22 PM PDT |
django.db.utils.OperationalError: could not connect to server: Connection refused Posted: 16 May 2021 05:36 PM PDT I found a Django project and failed to get it running in a Docker container. Here is what I did: git clone https://github.com/hotdogee/django-blast.git $ cat requirements.txt In this file, the dependencies below had to be updated: - kombu==3.0.30
- psycopg2==2.8.6
I have the following Dockerfile: FROM python:2 ENV PYTHONUNBUFFERED=1 WORKDIR /code COPY requirements.txt /code/ RUN pip install -r requirements.txt COPY . /code/ For docker-compose.yml I use: version: "3" services: db: image: postgres volumes: - ./data/db:/var/lib/postgresql/data environment: - POSTGRES_DB=postgres - POSTGRES_USER=postgres - POSTGRES_PASSWORD=postgres web: build: . command: python manage.py runserver 0.0.0.0:8000 volumes: - .:/code ports: - "8000:8000" depends_on: - db Next, I ran into this error: $ docker-compose build ... web_1 | System check identified no issues (0 silenced). web_1 | Unhandled exception in thread started by <function wrapper at 0x7f265ed26850> web_1 | Traceback (most recent call last): web_1 | File "/usr/local/lib/python2.7/site-packages/django/utils/autoreload.py", line 223, in wrapper web_1 | fn(*args, **kwargs) web_1 | File "/usr/local/lib/python2.7/site-packages/django/core/management/commands/runserver.py", line 112, in inner_run web_1 | self.check_migrations() web_1 | File "/usr/local/lib/python2.7/site-packages/django/core/management/commands/runserver.py", line 164, in check_migrations web_1 | executor = MigrationExecutor(connections[DEFAULT_DB_ALIAS]) web_1 | File "/usr/local/lib/python2.7/site-packages/django/db/migrations/executor.py", line 19, in __init__ web_1 | self.loader = MigrationLoader(self.connection) web_1 | File "/usr/local/lib/python2.7/site-packages/django/db/migrations/loader.py", line 47, in __init__ web_1 | self.build_graph() web_1 | File "/usr/local/lib/python2.7/site-packages/django/db/migrations/loader.py", line 180, in build_graph web_1 | self.applied_migrations = recorder.applied_migrations() web_1 | File "/usr/local/lib/python2.7/site-packages/django/db/migrations/recorder.py", line 59, in applied_migrations web_1 | self.ensure_schema() web_1 | File "/usr/local/lib/python2.7/site-packages/django/db/migrations/recorder.py", line 49, in ensure_schema web_1 | if self.Migration._meta.db_table in self.connection.introspection.table_names(self.connection.cursor()): web_1 | File "/usr/local/lib/python2.7/site-packages/django/db/backends/base/base.py", line 162, in cursor web_1 | cursor = self.make_debug_cursor(self._cursor()) web_1 | File "/usr/local/lib/python2.7/site-packages/django/db/backends/base/base.py", line 135, in _cursor web_1 | self.ensure_connection() web_1 | File "/usr/local/lib/python2.7/site-packages/django/db/backends/base/base.py", line 130, in ensure_connection web_1 | self.connect() web_1 | File "/usr/local/lib/python2.7/site-packages/django/db/utils.py", line 97, in __exit__ web_1 | six.reraise(dj_exc_type, dj_exc_value, traceback) web_1 | File "/usr/local/lib/python2.7/site-packages/django/db/backends/base/base.py", line 130, in ensure_connection web_1 | self.connect() web_1 | File "/usr/local/lib/python2.7/site-packages/django/db/backends/base/base.py", line 119, in connect web_1 | self.connection = self.get_new_connection(conn_params) web_1 | File "/usr/local/lib/python2.7/site-packages/django/db/backends/postgresql_psycopg2/base.py", line 172, in get_new_connection web_1 | connection = Database.connect(**conn_params) web_1 | File "/usr/local/lib/python2.7/site-packages/psycopg2/__init__.py", line 127, in connect web_1 | conn = _connect(dsn, connection_factory=connection_factory, **kwasync) web_1 | django.db.utils.OperationalError: could not connect to server: Connection refused web_1 | Is the server running on host "127.0.0.1" and accepting web_1 | TCP/IP connections on port 5432? What did I miss? 
Thank you in advance |
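For the Django/Docker question above, the traceback shows psycopg2 dialing 127.0.0.1:5432 inside the web container, so a hedged set of checks from the compose project directory (service names taken from the docker-compose.yml above, containers assumed to be up):
docker-compose exec db pg_isready -U postgres    # is Postgres accepting connections inside the db service?
docker-compose exec web getent hosts db          # does the web container resolve the 'db' service name?
docker-compose exec web python -c "import socket; socket.create_connection(('db', 5432), 2); print('db reachable')"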
Tacacs+ AAA Error Posted: 16 May 2021 04:43 PM PDT I have Windows Server 2012 running AD and TACACS+. I have added the config files for TACACS+. Some users work without any problem, but some do not. For a test, I created a new account in AD in the desired group. Then I tried to test this user with the tactest command, but I got the following error: Command Pass status = False, Message=AAA Error. Contact the administrator. Again, most of the users work without any errors. What does "AAA Error" mean? |
How to Run/Access different web directory for virtual host other than default root? Posted: 16 May 2021 08:08 PM PDT I set up a virtual host on Ubuntu 20.04 to access my web projects. I followed the necessary steps, but because I wanted a more portable setup and to serve my web projects from somewhere other than the default document root, I set my own media device path (/media/akin/7114-BB32/htdocs/) instead of the default document root (/var/www/html/). But when I tried to access my domain (htdocs.com) from the browser, it did not work: I could not view my own index page; it opened the index of a different web page instead. I could not find exactly what I was doing wrong or missing. What should I do? And here is my domain.conf (htdocs.conf): <VirtualHost *:80> ServerAdmin webmaster@htdocs.com ServerName www.htdocs.com ServerAlias www.htdocs.com DocumentRoot /media/akin/7114-BB32/htdocs <Directory /media/akin/7114-BB32/htdocs> Options Indexes FollowSymLinks AllowOverride all Require all granted </Directory> ErrorLog ${APACHE_LOG_DIR}/error.log CustomLog ${APACHE_LOG_DIR}/access.log combined </VirtualHost> Also apache2.conf: PidFile ${APACHE_PID_FILE} Timeout 300 KeepAlive On MaxKeepAliveRequests 100 KeepAliveTimeout 5 HostnameLookups Off ErrorLog ${APACHE_LOG_DIR}/error.log LogLevel warn IncludeOptional mods-enabled/*.load IncludeOptional mods-enabled/*.conf Include ports.conf <Directory /> Options FollowSymLinks AllowOverride None Require all denied </Directory> <Directory /usr/share> AllowOverride None Require all granted </Directory> <Directory /var/www/> Options Indexes FollowSymLinks Options +MultiViews AllowOverride All Require all granted </Directory> <Directory /media/akin/7114-BB32/htdocs/> Options Indexes FollowSymLinks MultiViews AllowOverride All Require all granted </Directory> AccessFileName .htaccess <FilesMatch "^\.ht"> Require all denied </FilesMatch> LogFormat "%v:%p %h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined LogFormat "%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" combined LogFormat "%h %l %u %t \"%r\" %>s %O" common LogFormat "%{Referer}i -> %U" referer LogFormat "%{User-agent}i" agent |
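For the Apache question above, two quick checks are usually informative: which vhost Apache actually selects for the requested name, and whether the www-data user can traverse the mounted path (a sketch for Ubuntu 20.04; the index filename is an assumption):
apache2ctl -S                                                        # list vhosts and which config wins for each ServerName
sudo -u www-data namei -om /media/akin/7114-BB32/htdocs/index.html   # ownership/mode along every path component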
Exposing simple pod using kubernetes ingress Posted: 16 May 2021 04:11 PM PDT Hi I'm learning kubernetes and I'm having trouble exposing the service. I want to route traffic to my cluster from HAProxy. I'm using my own bare-metal server. EDIT: I've created also ingress controller using: https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/ Now, when I describe my ingress I can see IP Address of worker machine but still I've got Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>) And no idea how to get access to my pod... example config: deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: name: apache labels: app: apache-test spec: replicas: 1 selector: matchLabels: app: apache-test template: metadata: labels: app: apache-test spec: containers: - name: apache image: httpd ports: - containerPort: 80 service.yaml apiVersion: v1 kind: Service metadata: name: apache-test-service spec: selector: app: apache-test ports: - protocol: TCP port: 80 targetPort: 80 name: http ingress.yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: apache-test-ingress spec: rules: - host: apache-test.com http: paths: - pathType: Prefix path: "/" backend: service: name: apache-test-service port: number: 80 What's wrong? describe ingress: Name: apache-test-ingress Namespace: default Address: 192.168.6.72 Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>) Rules: Host Path Backends ---- ---- -------- apache-test / apache-test-service:80 (10.44.0.1:80) Annotations: <none> Events: <none> describe service: Name: apache-test-service Namespace: default Labels: <none> Annotations: <none> Selector: app=apache-test Type: ClusterIP IP Family Policy: SingleStack IP Families: IPv4 IP: 10.104.63.167 IPs: 10.104.63.167 Port: <unset> 80/TCP TargetPort: 80/TCP Endpoints: 10.44.0.1:80 Session Affinity: None Events: <none> describe controller: Name: ingress-nginx-controller-55bc4f5576-vpsgb Namespace: ingress-nginx Priority: 0 Node: kubernetes-node02/192.168.6.72 Start Time: Sun, 16 May 2021 16:47:26 +0200 Labels: app.kubernetes.io/component=controller app.kubernetes.io/instance=ingress-nginx app.kubernetes.io/name=ingress-nginx pod-template-hash=55bc4f5576 Annotations: <none> Status: Running IP: 10.36.0.1 IPs: IP: 10.36.0.1 Controlled By: ReplicaSet/ingress-nginx-controller-55bc4f5576 Containers: controller: Container ID: docker://7daf566a039aba0d06f856b0adcc03659423ec2462c33d9a79f820b58dfcbf98 Image: k8s.gcr.io/ingress-nginx/controller:v0.46.0@sha256:52f0058bed0a17ab0fb35628ba97e8d52b5d32299fbc03cc0f6c7b9ff036b61a Image ID: docker-pullable://k8s.gcr.io/ingress-nginx/controller@sha256:52f0058bed0a17ab0fb35628ba97e8d52b5d32299fbc03cc0f6c7b9ff036b61a Ports: 80/TCP, 443/TCP, 8443/TCP Host Ports: 0/TCP, 0/TCP, 0/TCP Args: /nginx-ingress-controller --election-id=ingress-controller-leader --ingress-class=nginx --configmap=$(POD_NAMESPACE)/ingress-nginx-controller --validating-webhook=:8443 --validating-webhook-certificate=/usr/local/certificates/cert --validating-webhook-key=/usr/local/certificates/key State: Running Started: Sun, 16 May 2021 16:47:28 +0200 Ready: True Restart Count: 0 Requests: cpu: 100m memory: 90Mi Liveness: http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=5 Readiness: http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=3 Environment: POD_NAME: ingress-nginx-controller-55bc4f5576-vpsgb (v1:metadata.name) POD_NAMESPACE: 
ingress-nginx (v1:metadata.namespace) LD_PRELOAD: /usr/local/lib/libmimalloc.so Mounts: /usr/local/certificates/ from webhook-cert (ro) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ftnfs (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: webhook-cert: Type: Secret (a volume populated by a Secret) SecretName: ingress-nginx-admission Optional: false kube-api-access-ftnfs: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: <nil> DownwardAPI: true QoS Class: Burstable Node-Selectors: kubernetes.io/os=linux Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: <none> logs from POD doesn't show anything... logs from ingress controller: I0516 14:47:28.871207 8 flags.go:208] "Watching for Ingress" class="nginx" W0516 14:47:28.871287 8 flags.go:213] Ingresses with an empty class will also be processed by this Ingress controller W0516 14:47:28.872068 8 client_config.go:614] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. I0516 14:47:28.872594 8 main.go:241] "Creating API client" host="https://10.96.0.1:443" I0516 14:47:28.887394 8 main.go:285] "Running in Kubernetes cluster" major="1" minor="21" git="v1.21.0" state="clean" commit="cb303e613a121a29364f75cc67d3d580833a7479" platform="linux/amd64" I0516 14:47:29.768986 8 main.go:105] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem" I0516 14:47:29.772688 8 main.go:115] "Enabling new Ingress features available since Kubernetes v1.18" W0516 14:47:29.775841 8 main.go:127] No IngressClass resource with name nginx found. Only annotation will be used. I0516 14:47:29.793896 8 ssl.go:532] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key" I0516 14:47:29.829161 8 nginx.go:254] "Starting NGINX Ingress controller" I0516 14:47:29.848934 8 event.go:282] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"0cf6bc98-71b3-4387-a535-7d3dcb956fc8", APIVersion:"v1", ResourceVersion:"401441", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller I0516 14:47:30.936661 8 event.go:282] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"apache-test-ingress", UID:"6e3c5757-28cf-4a68-be98-827fd69ee86f", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"400092", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync I0516 14:47:31.030103 8 nginx.go:296] "Starting NGINX process" I0516 14:47:31.030266 8 leaderelection.go:243] attempting to acquire leader lease ingress-nginx/ingress-controller-leader-nginx... 
I0516 14:47:31.030658 8 nginx.go:316] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key" I0516 14:47:31.031274 8 controller.go:146] "Configuration changes detected, backend reload required" I0516 14:47:31.040799 8 leaderelection.go:253] successfully acquired lease ingress-nginx/ingress-controller-leader-nginx I0516 14:47:31.041189 8 status.go:84] "New leader elected" identity="ingress-nginx-controller-55bc4f5576-vpsgb" I0516 14:47:31.054203 8 status.go:204] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-55bc4f5576-vpsgb" node="kubernetes-node02" I0516 14:47:31.129614 8 controller.go:163] "Backend successfully reloaded" I0516 14:47:31.129922 8 controller.go:174] "Initial sync, sleeping for 1 second" I0516 14:47:31.130053 8 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-55bc4f5576-vpsgb", UID:"16d9fca9-8ac9-4fc1-be40-056540857035", APIVersion:"v1", ResourceVersion:"401513", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration I0516 14:48:31.054140 8 status.go:284] "updating Ingress status" namespace="default" ingress="apache-test-ingress" currentValue=[] newValue=[{IP:192.168.6.72 Hostname: Ports:[]}] I0516 14:48:31.067947 8 event.go:282] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"apache-test-ingress", UID:"6e3c5757-28cf-4a68-be98-827fd69ee86f", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"401625", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync describe POD Name: apache-67487b7c8b-8jbgb Namespace: default Priority: 0 Node: kubernetes-node01/192.168.6.71 Start Time: Sun, 16 May 2021 15:13:07 +0200 Labels: app=apache-test pod-template-hash=67487b7c8b Annotations: <none> Status: Running IP: 10.44.0.1 IPs: IP: 10.44.0.1 Controlled By: ReplicaSet/apache-67487b7c8b Containers: apache: Container ID: docker://70e4e3c4e01dffa11aa3c945f297e2cf3bc8af249c8d900c8aa30381ce7f56e6 Image: httpd Image ID: docker-pullable://httpd@sha256:e4c2b93c04762468a6cce6d507d94def02ef4dc285278d0d926e09827f4857db Port: 80/TCP Host Port: 0/TCP State: Running Started: Sun, 16 May 2021 15:13:10 +0200 Ready: True Restart Count: 0 Environment: <none> Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-c8dfx (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: kube-api-access-c8dfx: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: <nil> DownwardAPI: true QoS Class: BestEffort Node-Selectors: <none> Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: <none> |
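For the Kubernetes ingress question above, the ingress controller itself still has to be reachable from outside the cluster; on bare metal that usually means finding the NodePort of its Service (or giving it hostNetwork/LoadBalancer) and pointing HAProxy at that. A hedged test (the 30080 NodePort is a placeholder):
kubectl get svc -n ingress-nginx ingress-nginx-controller    # note the NodePort mapped to port 80
curl -H 'Host: apache-test.com' http://192.168.6.72:30080/   # should return the httpd pod's page if routing works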
Why is my database restore failing with ConnectFailure exception? Posted: 16 May 2021 04:54 PM PDT I am trying to restore a SQL Server database to an AWS RDS instance from a full backup file created on a different non-RDS server. I am running the operation from an EC2 Windows Server instance in the same VPC (we'd like to keep the RDS instance with outside connections disabled). SSMS on that EC2 can connect to the RDS instance with no problem. The EC2 instance has an applied IAM role allowing it to read from S3, and I can see the bucket with the .bak file from there (using the CLI tool, for example). The RDS instance also has an IAM role allowing S3 reading. When I open a query window on the RDS instance in SSMS and run the command exec msdb.dbo.rds_restore_database @restore_db_name = 'database_name' , @s3_arn_to_restore_from = 'arn:aws:s3:::bucket-name/FULL_database_name_20210316050151.bak' the restore task is created. But anywhere from two to four minutes later, exec msdb.dbo.rds_task_status @db_name = 'database_name' shows that the task has been updated to a lifecycle value ERROR with this task_info value: [2021-05-15 16:27:41.183] Aborted the task because of a task failure or a concurrent RESTORE_DB request. [2021-05-15 16:27:41.213] Task has been aborted [2021-05-15 16:27:41.213] A WebException with status ConnectFailure was thrown. An hour of Googling A WebException with status ConnectFailure was thrown has turned up nothing relevant to my case, and the AWS doc keeps pointing me back to the IAM roles I've already created. What piece of this puzzle am I missing? Full disclosure: I am not an experienced sys admin; I work in a small shop with no dedicated admin, moving a web site to AWS for the first time. So please don't assume I've tried all the easy things, because it's entirely possible I don't know what all the easy things are. |
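For the RDS restore question above, besides the IAM role, native restore also requires the instance to use an option group with the SQLSERVER_BACKUP_RESTORE option attached; a sketch of how one might verify that with the AWS CLI (identifiers are placeholders):
aws rds describe-db-instances --db-instance-identifier my-sql-instance --query 'DBInstances[0].OptionGroupMemberships'
aws rds describe-option-groups --option-group-name my-option-group --query 'OptionGroupsList[0].Options[].OptionName'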
How would I set up my server block to redirect a domain that is not mine but is pointing to my server? Posted: 16 May 2021 09:30 PM PDT I have read through many similar questions and answers to this issue and don't exactly know how to set up my server block to handle it. How would I set up the following to point the wayward-domain.com somewhere else? server { listen 80; listen [::]:80; server_name example.com www.example.com; return 301 https://example.com$request_uri; } server { listen 443 ssl http2; listen [::]:443 ssl http2; root /var/www/example.com/public_html; index index.php; server_name example.com www.example.com; } |
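For the nginx question above, before changing anything it helps to confirm which server block currently catches the stray name; a small test from the server itself (loopback address and domain are illustrative):
curl -sI -H 'Host: wayward-domain.com' http://127.0.0.1/ | head -n 1
nginx -T | grep -n server_name    # dump the effective config and list every server_name in play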
How to run Windows IKEv2 with a whitelist-mode firewall? Posted: 16 May 2021 08:05 PM PDT I have a small problem. I set up Firewall App Blocker in whitelist mode (basically it cuts all Internet access except for allowed apps by setting "block all connections that do not match a firewall rule" in Windows Firewall). After switching on the whitelist, my OpenVPN and IKEv2 connections stopped working (as expected). So I added the OpenVPN daemon to the allowed list, and OpenVPN now works correctly. But when I added rasclient from system32 to the allowed list, IKEv2 still did not work: it connects successfully, but the internet does not work through it. I even added all the system32 executables (the ones that appear at the top level when opening system32) to the whitelist, but IKEv2 still does not work. Please tell me which executable I should allow so that IKEv2 connects successfully like OpenVPN, or what other firewall rule I should add to allow the IKEv2 connection without disabling whitelist mode. Thanks for taking the time to solve my problem. |
Cannot change password on ESXi 6.7 after modifying "Security.PasswordQualityControl" Posted: 16 May 2021 08:03 PM PDT See edit below for a workaround and possible explanation According to the VMware documentation (as well as the documentation for pam_passwdqc) I should be able to edit the Security.PasswordQualityControl setting to allow different password complexity types. So for example, I would try to use the following string to allow a password with 1 chartype and a minimum length of 20: retry=3 min=20,16,disabled,16,16 The problem is that after I make this change, I can no longer change any password for any user. When attempting a password change, I always receive the following error: Failed to set the password. It is possible that your password does not meet the complexity criteria set by the system. Furthermore, in the log is the actual error: Failed - A general system error occurred: passwd: Critical error - immediate abort Note that this is NOT the standard password complexity warning. The usual warning is something like Weak password: too short. I suspected that I had a typo in my setting string, but after poring over the docs I am convinced that the parameters are correct. The odd part is that I can modify the last 2 parameters (for 3 and 4 chartype passwords) to my heart's content without issue. For example min=disabled,disabled,disabled,4,4 is fine and would allow me to use 4 character passwords. But as soon as I attempt to modify the first 2 parameters, I get a critical error no matter whose password I try to change and no matter whether or not I meet the complexity requirements. It's as if changing those first 2 numbers breaks something. Does anyone have any idea how I can change this setting without breaking PAM? EDIT: I found a workaround for the moment but I believe that this behaviour is due to a bug in the pam_passwdqc module. The man page mentions the following: Each subsequent number is required to be no larger than the preceding one. It seems like the module considers a value of "disabled" to be equal to 0, therefore causing the above check to fail in my case. Since the above line specifically says "numbers" and it seems logical to be able to disable passphrase use (the 3rd number) while enabling the rest, the behaviour described in the man page is inconsistent with the program's behaviour and I consider this to be a bug. So in short, the following will work: min=disabled,20,20,12,12 But the following will not: min=30,20,disabled,12,12 |
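For the ESXi question above, the same advanced option can also be read and written from the ESXi shell, which makes it easy to roll back a value that blocks password changes; a sketch, assuming this option is exposed under the usual esxcli advanced-settings path and using an example value:
esxcli system settings advanced list -o /Security/PasswordQualityControl
esxcli system settings advanced set -o /Security/PasswordQualityControl -s 'retry=3 min=disabled,20,20,12,12'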
How to Access a host behind a server over VPN (IP Forwarding) Posted: 16 May 2021 05:03 PM PDT I have an Ubuntu server (172.10.200.11) and many remote terminal units with SIM cards, and each SIM card operator uses a different IP pool, such as 10.57.0.0/16 for OP-A, 10.112.0.0/16 for OP-B, etc. On the server I am currently using a routing table to be able to reach the remote terminal units on the different IP pools. Without this routing table I have no access to the IP pools. For instance: 10.57.0.0/16 using gw 172.10.238.1 10.112.0.0/16 using gw 172.10.238.2 10.155.0.0/16 using gw 172.10.238.3 And... I am currently using "Cisco VPN Client" to access my server IP from outside the datacenter. For this I make a connection to the VPN gateway using the "Cisco VPN Client" application, then try to ping my server's IP 172.10.200.11. I can do the following things successfully from my laptop after my VPN connection to the VPN gateway is established: - Establish a Remote Desktop connection to the server's IP: 172.10.238.3
- Establish SSH connection from my laptop to the server's IP:172.10.238.3
- ICMP ping to the server's IP:172.10.238.3
- Traceroute to the server's IP:172.10.238.3
To check the connection (up or down) status of the Remote Terminal Units, I connect to the server via SSH or Remote Desktop and then try to ping the IP address of the Remote Terminal Unit. Everything is OK up to here, but this approach consumes too much bandwidth, especially when I connect via Remote Desktop. My current workflow is: - Connect to VPN Gateway using "Cisco VPN Client" from the laptop
- Establish a Remote Desktop connection to server's IP
- Open Browser in the server on Remote Desktop Connection.
- Enter Remote Terminal Unit (RTU)'s IP (10.155.1.22) on the browser and do what you want!
- Successfully establish an ICMP ping to the RTU's IP address (10.155.1.22) in a terminal screen on Remote Desktop or in an SSH session.
But I want to do the following: - Connect to VPN Gateway using Cisco VPN Client from the laptop
- Open Browser in the laptop
- Enter Remote Terminal Unit's IP (10.155.1.22) on browser and do what you want!
- Successfully establish an ICMP ping to the RTU's IP address (10.155.1.22) in a shell/terminal screen on my laptop (NOT in Remote Desktop)
Restrictions: - I have no right to change the VPN gateway settings; I can only change the server settings to achieve this.
Is there any way to do this? I know it exists, but I am confused. First I installed Hamachi, but that gives me access to the server without needing the VPN connection, and I still cannot ping the RTU IPs directly from my laptop, so it did not solve my problem. Technically, I want my server to act as a "router" and route/forward incoming requests from my laptop (ICMP and IP packets) to the RTU's IP. I have researched how to enable IP MASQUERADE or IP forwarding on Ubuntu 16.04. If I understand correctly, it technically needs 2 NICs or 2 different IPs on the machine, but I have only one (bonded) IP on my server. I need to reach the RTU IP addresses directly from my laptop, with my server acting as a router/gateway to achieve this. Can anybody explain step by step how to do this on Ubuntu? Click to see picture of my system diagram |
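For the VPN/forwarding question above, forwarding plus MASQUERADE does not strictly require two NICs, only that the server forwards between the VPN-facing network and the RTU routes it already has (the laptop/VPN side still needs a route for the RTU pool via the server). A minimal server-side sketch on Ubuntu, where bond0 and the 10.155.0.0/16 pool are assumptions taken from the question:
sysctl -w net.ipv4.ip_forward=1                                          # enable forwarding (persist it in /etc/sysctl.conf)
iptables -A FORWARD -d 10.155.0.0/16 -j ACCEPT
iptables -A FORWARD -s 10.155.0.0/16 -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -t nat -A POSTROUTING -d 10.155.0.0/16 -o bond0 -j MASQUERADE   # so replies return via the server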
How much of my memory is actually used? Posted: 16 May 2021 06:02 PM PDT I have a VPS with 8 GB of memory running mainly docker containers. When I go to the cloud monitoring service provided by my provider, I see that ~10% of the RAM is used. I don't understand how this value is related to the output of the three following commands (I have put the outputs of these at the end of my question): $ free -mh $ top $ docker stats Could someone explain me why free -mh tells me that I am using 7.5/7.8GB of memory, while my VPS tells me I am only using 10%, and the values given by the top command do not make sense to me? free free -mh total used free shared buffers cached Mem: 7.8G 7.5G 275M 144M 348M 2.3G -/+ buffers/cache: 4.9G 3.0G Swap: 1.9G 454M 1.5G docker stats docker stats --no-stream CONTAINER CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS 496ec398b3f9 0.09% 2.285 MiB / 7.812 GiB 0.03% 648 B / 648 B 40.96 kB / 0 B 0 e8d13a2df058 0.07% 674.2 MiB / 7.812 GiB 8.43% 196.4 kB / 252.3 kB 10.43 MB / 364.5 kB 0 87f43f54f772 0.01% 276.8 MiB / 7.812 GiB 3.46% 7.396 MB / 7.802 MB 71.07 MB / 0 B 0 6a9039c835ad 3.49% 885.1 MiB / 7.812 GiB 11.06% 150.6 kB / 20.93 MB 124.9 MB / 138.7 MB 0 5f56a4113665 0.91% 1014 MiB / 7.812 GiB 12.67% 2.681 GB / 368.8 MB 197.2 MB / 17.56 MB 0 8dabe37320f9 0.10% 3.762 MiB / 7.812 GiB 0.05% 270.1 MB / 213.4 MB 249.9 kB / 3.883 MB 0 57f6c6b96e72 0.34% 39.24 MiB / 7.812 GiB 0.49% 58.84 MB / 67.32 MB 6.246 MB / 4.649 MB 0 738e3d84e9d4 0.00% 3.562 MiB / 7.812 GiB 0.04% 1.048 GB / 1.045 GB 1.155 MB / 0 B 0 17704ca17a93 0.00% 49.04 MiB / 7.812 GiB 0.61% 271.8 MB / 1.484 GB 83.34 MB / 0 B 0 3beefb4fd14a 0.04% 31.12 MiB / 7.812 GiB 0.39% 342.5 kB / 875.2 kB 5.235 MB / 69.63 kB 0 4035cf7f0af5 0.04% 68.16 MiB / 7.812 GiB 0.85% 215.2 MB / 672.8 MB 148.9 MB / 16.09 GB 0 4fba55aa76a4 0.05% 42.61 MiB / 7.812 GiB 0.53% 147.5 kB / 19.09 kB 167 MB / 24.58 kB 0 83571a1747cb 0.00% 8.207 MiB / 7.812 GiB 0.10% 25.23 MB / 20.19 MB 13.8 MB / 8.192 kB 0 top top # Sorted by memory usage top - 10:14:19 up 20 days, 17:27, 1 user, load average: 0.36, 0.43, 0.47 Tasks: 284 total, 1 running, 283 sleeping, 0 stopped, 0 zombie %Cpu(s): 1.0 us, 0.4 sy, 0.0 ni, 98.5 id, 0.1 wa, 0.0 hi, 0.0 si, 0.0 st KiB Mem: 8191140 total, 7896220 used, 294920 free, 356984 buffers KiB Swap: 1998844 total, 465312 used, 1533532 free. 
2449876 cached Mem PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 8034 systemd+ 20 0 711.4m 469.8m 2.7m S 0.0 5.9 0:00.00 clamd 43964 998 20 0 712.7m 369.3m 8.1m S 1.7 4.6 144:06.61 bundle 44897 998 20 0 688.2m 226.1m 7.1m S 0.0 2.8 0:03.19 bundle 44895 998 20 0 507.8m 194.5m 6.9m S 0.0 2.4 0:02.27 bundle 46409 www-data 20 0 3728.6m 167.0m 4.7m S 0.0 2.1 10:42.66 node 43885 998 20 0 505.8m 158.7m 6.8m S 0.0 2.0 0:46.59 bundle 8058 Debian-+ 20 0 230.3m 114.9m 6.4m S 0.0 1.4 0:00.72 /usr/sbin/amavi 8057 Debian-+ 20 0 228.7m 113.3m 6.4m S 0.0 1.4 0:00.22 /usr/sbin/amavi 8052 Debian-+ 20 0 227.0m 112.1m 5.5m S 0.0 1.4 0:01.21 /usr/sbin/amavi 46410 www-data 20 0 998.3m 105.9m 4.5m S 0.0 1.3 9:20.22 node 46418 www-data 20 0 983.0m 95.9m 6.1m S 0.0 1.2 0:51.50 node 46415 www-data 20 0 993.0m 80.7m 4.2m S 0.0 1.0 0:19.56 node 46400 www-data 20 0 974.3m 78.4m 4.3m S 0.3 1.0 60:12.96 node 46402 www-data 20 0 991.1m 76.0m 4.7m S 0.0 1.0 0:20.13 node 46414 www-data 20 0 990.3m 75.7m 4.0m S 0.0 0.9 0:58.40 node 46411 www-data 20 0 971.2m 74.5m 3.9m S 0.0 0.9 7:59.44 node 46398 www-data 20 0 985.9m 71.8m 4.0m S 0.0 0.9 0:18.67 node 46408 www-data 20 0 957.8m 64.1m 3.6m S 0.0 0.8 8:40.84 node 311 mikael 20 0 636.8m 63.0m 57.4m S 0.0 0.8 0:00.18 php-fpm7.1 46413 www-data 20 0 1754.8m 52.8m 3.3m S 0.0 0.7 0:01.27 node 550 root 20 0 1872.5m 52.0m 19.7m S 0.0 0.7 96:58.89 dockerd 1415 999 20 0 966.3m 48.6m 4.9m S 0.0 0.6 19:56.53 mysqld 322 mikael 20 0 646.2m 47.1m 31.0m S 0.0 0.6 0:04.53 php-fpm7.1 321 mikael 20 0 646.2m 46.7m 30.8m S 0.0 0.6 0:05.11 php-fpm7.1 360 mikael 20 0 643.8m 44.4m 31.0m S 0.0 0.6 0:03.86 php-fpm7.1 13400 999 20 0 937.7m 34.0m 3.9m S 0.0 0.4 12:39.05 mysqld 30467 999 20 0 344.9m 25.6m 4.6m S 0.0 0.3 38:51.77 mongod 44940 996 20 0 59.1m 21.7m 7.7m S 0.0 0.3 0:00.14 postgres Some commands outputs $ uname -a Linux domain.com 3.16.0-4-amd64 #1 SMP Debian 3.16.39-1 (2016-12-30) x86_64 GNU/Linux $ docker version Client: Version: 1.12.6 API version: 1.24 Go version: go1.6.4 Git commit: 78d1802 Built: Tue Jan 10 20:17:57 2017 OS/Arch: linux/amd64 Server: Version: 1.12.6 API version: 1.24 Go version: go1.6.4 Git commit: 78d1802 Built: Tue Jan 10 20:17:57 2017 OS/Arch: linux/amd64 $ virt-what vmware |
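For the memory question above: on this kernel and procps version the "used" column of free still counts page cache and buffers, which the kernel can reclaim at any time, so the "-/+ buffers/cache" line (about 4.9G used / 3.0G free) is the figure closest to real application usage; the provider's 10% graph is evidently computed differently again. A quick way to compare the raw numbers:
free -mh
grep -E '^(MemTotal|MemFree|Buffers|Cached)' /proc/meminfo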
openldap replication on centos 7 not replicated Posted: 16 May 2021 09:04 PM PDT I have set up a simple OpenLDAP server on CentOS 7 minimal. I added a couple of users and set up a client that can retrieve the users using getent passwd, and ldapsearch works fine between the two. Then I tried to set up a simple replicator (consumer). No TLS at this stage; I am trying to keep it as simple as possible. For my syncprov on the provider I have, inside the file /etc/openldap/slapd.d/cn=config/olcDatabase={2}hdb/olcOverlay={0}syncprov.ldif: # AUTO-GENERATED FILE - DO NOT EDIT!! Use ldapmodify. # CRC32 80120f94 dn: olcOverlay={0}syncprov objectClass: olcOverlayConfig objectClass: olcSyncProvConfig olcOverlay: {0}syncprov olcSpSessionlog: 100 structuralObjectClass: olcSyncProvConfig entryUUID: ba668464-d734-1035-9bf8-97aa47bee689 creatorsName: gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth createTimestamp: 20160705194510Z entryCSN: 20160705194510.621665Z#000000#000#000000 modifiersName: gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth modifyTimestamp: 20160705194510Z Then I set up a new server called simple-replicator.example.com with the same base.ldif as my provider but did not create any users. I read that I should set it up as a client first and then install the LDAP server, so I did that too. Inside the file /etc/openldap/slapd.d/cn=config/olcDatabase={2}hdb.ldif I have: # AUTO-GENERATED FILE - DO NOT EDIT!! Use ldapmodify. # CRC32 3f0c6b1c dn: olcDatabase={2}hdb objectClass: olcDatabaseConfig objectClass: olcHdbConfig olcDatabase: {2}hdb olcDbDirectory: /var/lib/ldap olcDbIndex: objectClass eq,pres olcDbIndex: ou,cn,mail,surname,givenname eq,pres,sub structuralObjectClass: olcHdbConfig entryUUID: 0f0af22a-d73a-1035-87b2-ddfb498f969e creatorsName: cn=config createTimestamp: 20160705202320Z olcSuffix: dc=example,dc=com olcRootDN: cn=Manager,dc=example,dc=com olcRootPW:: e1NTSEF9YVdkdGFid0dteVhqRVNTY0hGUVVTL3JYOW1xYTMyeE0= olcSyncrepl: {0}rid=001 provider=ldap://simple-provider.example.com:389/ bindmethod=simple binddn="cn=Manager,dc=example,dc=com" credentials={SSHA}UJzXEfBudfu5U6IGzFlea0TjKUvxBtc/ searchbase="dc=example,dc=com" scope=sub schemachecking=on type=refreshAndPersist retry="1 3 10 3" interval=00:00:01:00 entryCSN: 20160705205808.847049Z#000000#000#000000 modifiersName: gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth modifyTimestamp: 20160705205808Z My provider is called simple-provider.example.com and my replicator server is called simple-replicator.example.com. But my command ldapsearch -x -b 'ou=People,dc=example,dc=com' returns nothing. The provider has not pushed the data to the replicator. What am I doing wrong? It is supposed to replicate after 1 minute based on interval=00:00:01:00 Thank you in advance. |
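For the OpenLDAP question above, a useful first step is to confirm from the consumer host that the exact bind DN from olcSyncrepl can search the provider, and to watch the consumer's slapd log for syncrepl errors (hostname and DN are taken from the config above; -W just prompts interactively for the password):
ldapsearch -x -H ldap://simple-provider.example.com -D 'cn=Manager,dc=example,dc=com' -W -b 'dc=example,dc=com' -s sub dn
journalctl -u slapd --since '1 hour ago' | grep -i sync    # syncrepl/connection errors on the consumer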
AWS vmimport - stuck on booting phase Posted: 16 May 2021 07:05 PM PDT Currently importing an OVA from an S3 bucket. Windows 2008 R2 Standard Process stops at the booting phase "StatusMessage": "FirstBootFailure: This import request failed because the instance failed to boot and establish network connectivity.", This is a single volume machine that boots up fine if the OVA is reimported back to VMware. There is a logon disclaimer box configured to appear before choosing the account to logon to. I've followed the AWS VMimport pre-reqs, It is not domain joined, AV disabled, Windows Updates set to manual. A similar OVA has imported fine, so struggling to understand what is different about this one. Anyone able to offer a view on what might be the issue? |
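For the VM Import question above, the task's phase-by-phase progress can be pulled with the CLI, which sometimes carries more detail than the single StatusMessage line (the task ID is a placeholder):
aws ec2 describe-import-image-tasks --import-task-ids import-ami-0123456789abcdef0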
How to create a SCSM Email subscription for when the action log of a PROBLE is updated Posted: 16 May 2021 09:04 PM PDT Sorry about the title, it wouldn't let me use the word PROBLEM. In our SCSM we have used "SCSM 2012: Notify the analyst when an end-user comment is added to an incident" to have our system automatically email the analyst when a comment is added to the incident's action log. I would also like to automatically email the assigned-to user of a Problem when a comment is added to the associated Problem's action log. Anyone know how to accomplish this? |
VM iSCSI disk crashes on one VM host, not on the other Posted: 16 May 2021 10:03 PM PDT I have a VMware solution running on an HP BladeSystem with a LeftHand iSCSI SAN. There are currently two VMware hosts in that environment. I have two Debian VMs sharing an iSCSI disk (with OCFS2), mounted directly from the SAN using open-iscsi. It all worked perfectly, but yesterday one client crashed as soon as it attempted to write something to the shared OCFS2 partition. I tried setting some iSCSI parameters to more conservative values, to no avail. Only (v-)moving the client to the other VM host resolved the problem. Today, moving the other client to the problematic host provokes the same errors: connection1:0: ping timeout of 5 secs expired, recv timeout 5, last rx 4294971299, last ping 4294966612, now 4294973799 connection1:0: detected conn error (1011) iscsid: Kernel reported iSCSI connection 1:0 error (1011 - ISCSI_ERR_CONN_FAILED: iSCSI connection failed) state (3) kernel: [ 328.558970] connection1:0: detected conn error (1020) iscsid: connection1:0 is operational after recovery (1 attempts) [repeat until hard reset] It seems to be related to that VM host, which has the exact same configuration as the other one. Being blades, they use the same networking hardware, a Flex-10 interconnect. Does someone have any idea what this could be related to? I'd like to find the cause, as both VM hosts could end up having the same problem (I'll have to switch to networked disks then, which seems more stable and less prone to hard resets). |
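For the iSCSI question above, comparing the negotiated session settings and error counters on a VM running on the good host versus the problematic one can narrow down whether it is a path or timeout issue; a sketch using open-iscsi's own tooling:
iscsiadm -m session -P 3                                      # session state, timeouts, and per-connection error counts
iscsiadm -m node -o show | grep -Ei 'timeo|noop|replacement'  # the node settings the sessions were created with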
How do I set up disk quotas over LDAP on CentOS? Posted: 16 May 2021 05:03 PM PDT I've been googling for some time and I haven't been able to find any resources or hints on the subject. I am wondering if it is possible, and if so, how? Any nudge in the right direction will be appreciated. I do know that if you download and install "Linux Quota" from source, you get some Perl scripts which are supposed to help with the matter, but as far as I know there is no good documentation to guide you along the way. I am also running an NFS server on the same machine. Note: This is for a university assignment, so I might be totally stupid for asking this question. I am trying to explore the options. If there is a better way of solving this, please do tell. Edit: Here is a link to the site of Linux Quota. They do include an LDAP schema, so it should be possible. |
nginx reverse proxy - try upstream A, then B, then A again Posted: 16 May 2021 09:54 PM PDT I'm trying to set up nginx as a reverse proxy, with a large number of backend servers. I'd like to start up the backends on-demand (on the first request that comes in), so I have a control process (controlled by HTTP requests) which starts up the backend depending on the request it receives. My problem is configuring nginx to do it. Here's what I have so far: server { listen 80; server_name $DOMAINS; location / { # redirect to named location #error_page 418 = @backend; #return 418; # doesn't work - error_page doesn't work after redirect try_files /nonexisting-file @backend; } location @backend { proxy_pass http://$BACKEND-IP; error_page 502 @handle_502; # Backend server down? Try to start it } location @handle_502 { # What to do when the backend server is not up # Ping our control server to start the backend proxy_pass http://127.0.0.1:82; # Look at the status codes returned from control server proxy_intercept_errors on; # Fallback to error page if control server is down error_page 502 /fatal_error.html; # Fallback to error page if control server ran into an error error_page 503 /fatal_error.html; # Control server started backend successfully, retry the backend # Let's use HTTP 451 to communicate a successful backend startup error_page 451 @backend; } location = /fatal_error.html { # Error page shown when control server is down too root /home/nginx/www; internal; } } This doesn't work - nginx seems to ignore any status codes returned from the control server. None of the error_page directives in the @handle_502 location work, and the 451 code gets sent as-is to the client. I gave up trying to use internal nginx redirection for this, and tried modifying the control server to emit a 307 redirect to the same location (so that the client would retry the same request, but now with the backend server started up). However, now nginx is stupidly overwriting the status code with the one it got from the backend request attempt (502), despite that the control server is sending a "Location" header. I finally got it "working" by changing the error_page line to error_page 502 =307 @handle_502; , thus forcing all control server replies to be sent back to the client with a 307 code. This is very hacky and undesirable, because 1) there is no control over what nginx should do next depending on the control server's response (ideally we only want to retry the backend only if the control server reports success), and 2) not all HTTP clients support HTTP redirects (e.g. curl users and libcurl-using applications need to enable following redirects explicitly). What's the proper way to get nginx to try to proxy to upstream server A, then B, then A again (ideally, only when B returns a specific status code)? |
How can I connect from a client to SQL Server 2008 via a proxy? Posted: 16 May 2021 06:02 PM PDT - I have SQL Server 2008 on a dedicated server and all SQL Server ports are blocked by the firewall.
- I have a VPS with CCProxy on it.
- I opened all ports on the dedicated server for my VPS server's IP.
- I set the proxy in Internet Options (Windows 7) and the proxy works for web browsing, but I can't connect to SQL Server from the client.
Is there any solution? |
How to log effective outgoing HTTP requests made by Squid, with headers after adaptation? Posted: 16 May 2021 07:05 PM PDT I use Squid to modify some HTTP headers sent by clients. For testing purposes, I want to completely delete the 'User-Agent' header. Here is my config : request_header_access User-Agent deny all header_replace User-Agent Timmy logformat mylogformat %>a [%{%H:%M}tl] "%rm %ru HTTP/%rv" %>Hs "Accept:%{Accept}>ha User-Agent:%{User-Agent}>ha" {%Ss:%Sh} I use the syntax '>ha' to log "The HTTP request headers after adaptation and redirection", as it is said here http://www.squid-cache.org/Versions/v3/3.1/cfgman/logformat.html Unfortunately, according to my logfile access.log, the header is not modified : x.x.x.x [19:18] "GET http://example.org/favicon.ico HTTP/1.1" 404 "Accept:text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 User-Agent:Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:10.0.2) Gecko/20100101 Firefox/10.0.2" {TCP_MISS:DIRECT}` However, when requests are made to my Nginx server through the proxy, Nginx effectively logs correctly this header : [06/Mar/2012:19:18:07 +0100] "GET /sites/all/modules/colorbox/styles/default/images/controls.png HTTP/1.1" 200 2104 "http://example.org/sites/default/files/css/css_zpYGaC6A9wUNMpW3IPg55mz-mMAjvhuo-SZTcX-lqFQ.css" "Timmy" What is the right syntax to log correctly modified HTTP headers, urls rewritten, etc on squid log files ? My squid version is 2.7.STABLE9 and it runs on a Debian SQueeze 64bits. Thanks for your help |
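For the Squid question above, a simple end-to-end check is to send a request with a known User-Agent through the proxy and then compare what Squid logs against what the origin server logs; a sketch where the proxy address/port and log path are assumptions for a stock Debian squid 2.7 install:
curl -x http://127.0.0.1:3128 -A 'CanaryAgent/1.0' -sI http://example.org/ >/dev/null
tail -n 1 /var/log/squid/access.log    # does the logged User-Agent show CanaryAgent/1.0 or the replaced value?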
Can you specifically assign / bind a virtual network adapter to the server's physical network interface on VMware Workstation? Posted: 16 May 2021 08:03 PM PDT Hi, I'm trying to install CentOS and Snort on a virtual machine; is there any way I can bind the promiscuous virtual network adapter to a specific physical network interface on the server? I'm using VMware Workstation 7.1.4. If I recall correctly, VirtualBox can do that if you set the virtual network adapter to Bridged. However, I can't seem to find that option in VMware Workstation. Thanks in advance. |
change Access Permissions in Component Services > COM Security with script/api? Posted: 16 May 2021 10:03 PM PDT How do I go about changing the Access Permissions under COM Security programmatically? I need to write new values to "Edit Limits..." and "Edit Default..." |