Saturday, October 16, 2021

Recent Questions - Server Fault

Error connecting to a SQL Server database using an instance name

Posted: 16 Oct 2021 10:35 PM PDT

Please help: I get an error when connecting to the database using the instance name (screenshot below).

https://i.stack.imgur.com/ieePr.jpg

Thanks

Server is working but can't be accessed via IP on browser. Where do I start?

Posted: 16 Oct 2021 10:55 PM PDT

Server: CentOS 7 (cPanel) hosted with GoDaddy

I did a backup and my website/cPanel/WHM suddenly became unavailable. My domain name resolves to the IP and my server is running. cPanel is apparently working, and systemctl status exim.service shows the service as active.

Apache is running.

When I put the IP in the browser, Safari says it can't connect to the server. However, I can ping the IP address!

I am totally lost; any help would be appreciated.

Can't set up AD FS with SimpleSAMLphp; where do I get the metadata?

Posted: 16 Oct 2021 10:09 PM PDT

I'm following this article to set up AD FS SSO with PHP: https://stratbeans.medium.com/how-to-integrate-active-directory-in-php-application-for-sso-22eb62b6b866

I've successfully setup nginx + php-fpm on Ubuntu 20.04, and I'm stuck here:

(screenshot from the article showing the step I'm stuck on)

Where can I find the metadata on AD FS?
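For reference, AD FS normally publishes its federation metadata document at a well-known URL on the federation service; a minimal sketch for fetching it, assuming a hypothetical host adfs.example.com (replace with your own, and drop -k once the certificate is trusted):

# download the AD FS federation metadata (hypothetical hostname)
curl -k -o adfs-metadata.xml \
  "https://adfs.example.com/FederationMetadata/2007-06/FederationMetadata.xml"

The same document is also listed in the AD FS management console under Service > Endpoints as the federation metadata endpoint.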

Apache2 simple reverse proxy forwarding HTTP traffic to an LXD container only loads plain HTML, very slowly, in graphical browsers

Posted: 16 Oct 2021 08:34 PM PDT

The actual web servers run in LXD containers, while Apache2 on the host simply forwards HTTP traffic to them. The setup on the host is simple; everything else is default:

<VirtualHost *:80>
    ServerName example.com
    ServerAlias www.example.com
    ProxyPass / http://lxd.container.ip/
    ProxyPassReverse / http://lxd.container.ip/
</VirtualHost>

For whatever reason, the website loads very slowly in graphical browsers, and when it finally loads, it only renders plain HTML. In text browsers the website finishes loading almost instantly. If the proxy is instead set up through LXD as a device via

lxc config device add mycontainer http proxy listen=tcp:0.0.0.0:80 connect=tcp:127.0.0.1:80  

the website behaves correctly, but this method doesn't allow me to share one public IP among all sites on the same port. I also need to isolate each site's server in a separate container, so I cannot just put them in different document roots.
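One way to narrow this down (a diagnostic sketch, assuming curl is available on the host) is to time the same request directly against the container and through the proxy; if only the proxied request is slow, the delay is in the Apache hop rather than in the backend:

# time a request straight to the container (replace lxd.container.ip)
curl -s -o /dev/null -w 'direct:  %{time_total}s\n' http://lxd.container.ip/
# time the same request through the Apache reverse proxy on the host
curl -s -o /dev/null -w 'proxied: %{time_total}s\n' -H 'Host: example.com' http://127.0.0.1/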

How do I view a Docker container's console trace in the Kubernetes Dashboard?

Posted: 16 Oct 2021 07:47 PM PDT

I'm working through a tutorial that uses Kubernetes: .NET Microservices – Full Course

The instruction is aimed at MS Windows 10, but I'm working through it in both Windows and Linux, just because.

The actual code is written in .NET 5.0 using VS Code, both of which work fine in both Windows and Linux. The instruction uses Docker Desktop, but for the docker-specific stuff I've been able to use the standard Docker (docker/focal,focal 1.5-2 all) and that's worked fine, so far.

But the tutorial relies on the Docker Desktop installation of Kubernetes, and Docker Desktop hasn't actually been released for Linux, quite yet. So I'm using MicroK8S, which has installed and run fine, I think, with two issues.

  1. In Windows when I apply a deployment yaml file using the Docker Desktop Kubernetes install, it creates a pod and runs a deployment, and the running docker container shows up in a "docker ps" listing. When I apply the same yaml file in Linux, using MicroK8S, it looks like it is working, but the container does not show up in "docker ps".

  2. In Windows you can list the deployments in Docker Desktop, and by clicking on one you can see the console trace of the docker container. See timestamp 3:11:10 in the linked video. When I run in Linux, the Docker Desktop GUI isn't available. MicroK8S does make the Kubernetes Dashboard available, and in it I can see my pods, deployments, and replica sets.

What I have not figured out is how to view the docker console trace in Kubernetes Dashboard.

Any ideas?


Note - I have figured out how to view the trace using the kubectl command line:

microk8s kubectl logs platforms-depl-5dd6f7cb9-x2r4k platformservice  

I'm sure there is some way of doing this from the Kubernetes Dashboard GUI, but I haven't found it.
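For completeness, the same stream can also be followed from the CLI with standard kubectl flags (the deployment and container names below are the ones from the question):

# follow the container's log stream; -c selects the container inside the pod
microk8s kubectl logs -f deployment/platforms-depl -c platformservice

In the Dashboard itself, the equivalent view is normally reached from a pod's detail page, which has a log viewer for each container in the pod.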

What is this hacker trying to achieve?

Posted: 16 Oct 2021 10:12 PM PDT

I have hundreds of lines like this in my syslog, from many different IP addresses:

Oct 16 17:03:06 example named[857]: client @0x7fa2dc083e40 104.190.220.183#3075 (sl): query (cache) 'sl/ANY/IN' denied
Oct 16 17:03:06 example named[857]: client @0x7fa2d812bc70 90.196.21.194#80 (sl): query (cache) 'sl/ANY/IN' denied

A reverse DNS lookup on 104.190.220.183 gave Hostname: 104-190-220-183.lightspeed.tukrga.sbcglobal.net, and a reverse lookup on 90.196.21.194 gave Hostname: 5ac415c2.bb.sky.com.

What is this hacker trying to achieve, and should I be concerned, given that the attempts are denied?
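Queries of type ANY for a tiny zone such as sl, arriving from many unrelated source addresses, are typically probes for an open resolver to abuse in a DNS amplification/reflection attack, where the logged source IPs are often the spoofed victims. A quick check that the server keeps refusing such queries (a sketch; the config path is the Debian/Ubuntu default and may differ on your system):

# from an outside host, confirm the server refuses to answer for strangers
dig @your.server.ip sl ANY
# on the server, confirm recursion is restricted to trusted networks
grep -E 'recursion|allow-recursion' /etc/bind/named.conf.options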

Unable to forward ports on any Docker container

Posted: 16 Oct 2021 03:43 PM PDT

So I was trying to set up some websites and other things on my Ubuntu 20.04.3 LTS server: different web-UI-based apps in different Docker containers, accessed via nginx. But I wasn't able to connect to ANY of my containers. After troubleshooting, it seems that port forwarding doesn't work for any of them. It started with my Node-RED container; nginx didn't work either, and I have now set up a very simple whoami container with a very basic web server running on port 8000. I'm still not able to connect to it.

If I go into the container using docker exec -ti whoami sh, I am able to access the web server via wget, but not from outside the container.

I have searched a lot, and most reported issues were incorrect use of the -p flag, or the web server only listening on localhost, and so on. Neither is the case here.

Here is my test terminal output to show what's happening:

$~ docker ps
CONTAINER ID   IMAGE            COMMAND       CREATED              STATUS              PORTS                                       NAMES
726da9705b7f   jwilder/whoami   "/app/http"   About a minute ago   Up About a minute   0.0.0.0:8000->8000/tcp, :::8000->8000/tcp   whoami
$~
$~ wget -O - http://127.0.0.1:8000/
--2021-10-16 23:19:09--  http://127.0.0.1:8000/
Connecting to 127.0.0.1:8000... connected.
HTTP request sent, awaiting response... Read error (Connection reset by peer) in headers.
Retrying.

--2021-10-16 23:21:21--  (try: 2)  http://127.0.0.1:8000/
Connecting to 127.0.0.1:8000... connected.
HTTP request sent, awaiting response... ^C
$~
$~ docker exec -ti whoami sh
/app # wget -O - http://127.0.0.1:8000/
Connecting to 127.0.0.1:8000 (127.0.0.1:8000)
I'm 726da9705b7f
-                    100% |*****************************************************|    17   0:00:00 ETA
/app # exit
$~
$~ cat run.sh
docker run -d -p 8000:8000 --name whoami -t jwilder/whoami
$~

Proof that the server is not bound only to localhost:

$~ docker logs -f whoami
Listening on :8000
I'm 726da9705b7f
^C
$~
$~ docker exec -ti whoami sh
/app # ./http
Listening on :8000
2021/10/16 22:36:23 listen tcp :8000: bind: address already in use
/app # exit
$~

And as far as I can tell, the Docker proxy actually takes the port:

$~ sudo lsof -i:8000
COMMAND     PID USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
docker-pr 24976 root    4u  IPv4 112329      0t0  TCP *:8000 (LISTEN)
docker-pr 24984 root    4u  IPv6 114001      0t0  TCP *:8000 (LISTEN)
$~

But it's still not working.

Does anybody have an idea what is going wrong? I am very clueless.
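A "connection reset by peer" on the published port while the container answers internally often points at something interfering between docker-proxy and the container, such as mangled iptables NAT rules or another firewall layer. A diagnostic sketch to narrow it down:

# find the container's internal address and query it directly, bypassing the published port
CIP=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' whoami)
wget -qO- "http://$CIP:8000/"
# inspect the NAT and filter rules Docker installed for the published port
sudo iptables -t nat -L DOCKER -n -v
sudo iptables -L DOCKER-USER -n -v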

nftables produces an unexpected message in syslog

Posted: 16 Oct 2021 06:00 PM PDT

I have the following nftables rule: log prefix "[nftables] output denied1: " ip daddr 34.117.59.81 reject

In syslog I can see the message: [nftables] output denied1: IN= OUT=br0 SRC=10.10.10.1 DST=10.10.10.4 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=540 PROTO=ICMP TYPE=0 CODE=0 ID=2 SEQ=60848

Now I wonder how this is possible. The syslog message shows DST=10.10.10.4, but the rule shouldn't apply to that destination address.

It would be really cool if anyone could explain this behaviour.
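A possible explanation worth verifying against your full ruleset: nftables evaluates a rule's expressions and statements from left to right, so a log statement written before the ip daddr match fires for every packet that reaches the rule, not only for packets headed to 34.117.59.81. A sketch of the reordered rule, assuming a hypothetical inet filter table with an output chain:

# put the match before the log statement so only matching packets are logged
nft add rule inet filter output ip daddr 34.117.59.81 \
    log prefix "[nftables] output denied1: " reject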

What happens when using a crossover/straight-through cable for similar/dissimilar device?

Posted: 16 Oct 2021 10:34 PM PDT

I tried looking around, but I keep getting answers like "use this cable for that device", and at worst "sometimes it may still work because Auto-MDI/MDIX figures out your cable".

I think I know the answer, but I just do not want to presume. So what happens when you use, say, a crossover cable for a network-device-to-PC connection? Or a straight-through cable for a router-to-router connection? Is there simply no link if Auto-MDI/MDIX does not exist? If it did work, would there be intermittent connection issues or any faults on the line?

Does Auto-MDI/MDIX have any effect on resources or slow the connection, or is it simply a negotiation of which wiring standard to use?

How do I accelerate firewalld, or should it be abandoned for nftables instead?

Posted: 16 Oct 2021 08:15 PM PDT

We have a problem where we set up a server running a service capable of hundreds of simultaneous connections on port 3535 (arbitrarily assigned for this application). We have firewalld running on this near-end host, allowing connections from the far-end host, and that is all working. The problem we ran into is that the far-end host is only able to establish a few connections at a time, and it takes upwards of 30 seconds to get those connections. The most we have seen on the near-end receiving host is about 35 connections on average. We turned firewalld off and it immediately went to 850 connections; the far end reported no problems and no delays when connecting, and it ran flawlessly for 15 minutes (until we turned firewalld back on).

We have a very simple rule set and are not doing any kind of throttling. Is there default throttling in firewalld that I need to disable, or should I go to nftables, and if so will it actually perform better, or am I chasing a ghost? My ISP is not using VMware, so no external solution is available.

Thanks in advance. David
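As a first diagnostic pass (a sketch; the zone name public is an assumption, and the FirewallBackend setting only exists on newer firewalld versions), it is worth checking which backend firewalld is using, whether denied packets are being logged, and whether connection tracking is saturated, since conntrack pressure can also throttle new connections:

# backend and logging behaviour
grep FirewallBackend /etc/firewalld/firewalld.conf
sudo firewall-cmd --get-log-denied
sudo firewall-cmd --info-zone=public
# connection-tracking pressure
sysctl net.netfilter.nf_conntrack_count net.netfilter.nf_conntrack_max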

Install GoDaddy SSL certificate on nginx: pem, bundle, crt

Posted: 16 Oct 2021 04:05 PM PDT

It's a bit unclear, from the available instructions and forum posts, how to deal with the three files you get from GoDaddy when purchasing an SSL certificate from them. GoDaddy isn't very forthcoming in explaining it. In hindsight, now that I know how to do it, it seems unwise of them not to detail this in the instructions attached to the purchase, as it is not trivial to get it working.

When you purchase a Standard SSL certificate (Starfield SHA-2 or GoDaddy SHA-2) at GoDaddy, you indicate which server type you have and download a zip package. In the process, you also download two txt files.

For nginx, you indicate server type 'other', and your zip file contains 3 files (1-3). In the process, two more files (4-5) are created and saved separately:

  1. 3423l4kj23l4j.crt
  2. 3423l4kj23l4j.pem
  3. sf_bundle-g1-g1.crt
  4. generated-private-key.txt
  5. generated-csr.txt

When opened in Notepad, files 1 and 2 above are identical:

-----BEGIN CERTIFICATE-----
MM123XXXXXX
XXXXXXXO8km
-----END CERTIFICATE-----

sf_bundle-g1-g1.crt does not contain 1 or 2, but instead three separate entries:

-----BEGIN CERTIFICATE-----
XXXX1
XXXX2
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
XXXX3
XXXX4
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
XXXX5
XXXX6
-----END CERTIFICATE-----

generated-private-key.txt is unique

-----BEGIN PRIVATE KEY-----
XXXX7
XXXX8
-----END PRIVATE KEY-----

And finally, generated-csr.txt is also unique:

-----BEGIN CERTIFICATE REQUEST-----
XXXX9
XXXX0
-----END CERTIFICATE REQUEST-----

In Nginx:

  1. I have created a folder, /etc/nginx/ssl
  2. I edit /etc/nginx/sites-enabled/default.conf as below


server {
        listen 80 default_server ;
        listen [::]:80 default_server ;

I have changed this to:

server {
        listen 443 ssl ;
        listen [::]:443 ssl ;
        server_name example.com;

        ssl_certificate /etc/nginx/ssl/?????????.crt;
        ssl_certificate_key /etc/nginx/ssl/???????.key;

As it is a bit unclear what is what, and what a pem and a bundle are, I'd like to ask which of the unzipped files goes where:

  • ssl_certificate = crt, pem, bundle, gen_crt?
  • ssl_certificate_key = pem or private key?

UPDATE: I did as @nikita-kipriyanov suggested, and this worked; a consolidated sketch follows the list below.

  • combined/concatenated them: cat 3423l4kj23l4j.pem sf_bundle-g1-g1.crt > fullchain.pem. This becomes the ssl_certificate file.
  • renamed generated-private-key.txt to privkey.pem, then changed its file encoding: sudo iconv -c -f UTF8 -t ASCII privkey.pem >> privkey.pem
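Putting those steps together (a sketch based on the update above; the file names are the ones from the question and the /etc/nginx/ssl paths are assumptions):

# server certificate first, then the intermediate bundle - the order matters to nginx
cat 3423l4kj23l4j.pem sf_bundle-g1-g1.crt > /etc/nginx/ssl/fullchain.pem
cp generated-private-key.txt /etc/nginx/ssl/privkey.pem
# ssl_certificate -> fullchain.pem, ssl_certificate_key -> privkey.pem, then verify and reload
sudo nginx -t && sudo systemctl reload nginx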

How to do Bare Metal Deployment via MDT from the Cloud?

Posted: 16 Oct 2021 10:31 PM PDT

I have tried the steps mentioned in the link.

Here, instead of a local MDT deployment share, I configured IIS so the share can be accessed over HTTP/HTTPS. But it is still linked to the UNC path, which cannot be accessed over the internet.

After configuring IIS as per the steps mentioned:

The deployment share in the LiteTouch ISO is still trying to access the UNC path.

litetouch image deploy error

Because of this, internet-based deployment is not working.

Please suggest how to fix this.

In the screenshot below, from the reference link I shared, the deployment share is linked to an HTTP URL rather than a UNC path. How do I get the same result here?

The IIS configuration specified there is already done, and I can access that deployment share through a browser. reference image with http link

But it is not linked here.

kolla-ansible OpenStack CloudKitty error

Posted: 16 Oct 2021 09:08 PM PDT

I'm using an all-in-one kolla-ansible Wallaby release machine for developing a custom UI for a public cloud. When I try to get a summary in the Rating admin menu in Horizon, this error happens:

2021-10-14 11:46:19.756 28 ERROR cloudkitty.common.policy ...   - default default] Policy check for report:get_summary failed with credentials {'user': '2e69fcab25f8423693661478d155dca1', 'tenant': '66233f955a644a7586aab636e78a5a4a', 'system_scope': None, 'project': '66233f955a644a7586aab636e78a5a4a', 'domain': None, 'user_domain': 'default', 'project_domain': 'default', 'is_admin': True, 'read_only': False, 'show_deleted': False, 'auth_token': 'gAAAAABhaBiLpir5wU9Cw5Guv9sb2n4H45dkJACzC0KkgZNvioDBN1GCnOxXlZ-Wa9KUj_eJRuavqXISEckq-d37m9MBfeCGrY9S06K-09B1R5Pk8bEdNkVfCmJ7pBhabjVJNMgZK4xTVW2vhknchr3b9ATZsSzLRNq1CR__NETnPfJsBTv0-9jn0NorMMVSIDOp3V0G1dbK', 'request_id': 'req-f6ff3382-22e8-4310-a944-6dff7e07a656', 'global_request_id': None, 'resource_uuid': None, 'roles': ['admin', '_member_', 'reader', 'member'], 'user_identity': '2e69fcab25f8423693661478d155dca1 66233f955a644a7586aab636e78a5a4a - default default', 'is_admin_project': True}:   cloudkitty.common.policy.PolicyNotAuthorized: Policy doesn't allow report:get_summary to be performed.  

I've installed the cloudkitty CLI using pip in a Python virtual env, but I can't find how to change policies; there is nothing about that in the -h help output.

I've added the ceilometer, gnocchi and cloudkitty users to the service, admin and other projects as admin, but the errors are unchanged.

I've also enabled the HashMap module for the instance service and created a mapping for the service, but in the instance-creation window in Horizon the price is 0, and there is an error on the API side:

2021-10-14 11:50:45.156 28 ERROR wsme.api [req-fd3ab604-bb45-40c7-9965-f2c51c448256 2e69fcab25f8423693661478d155dca1 66233f955a644a7586aab636e78a5a4a - default default] Server-side error: "'list' object has no attribute 'start'
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming
    res = self.dispatcher.dispatch(message)
  File "/usr/lib/python3.6/site-packages/oslo_messaging/rpc/dispatcher.py", line 309, in dispatch
    return self._do_dispatch(endpoint, method, ctxt, args)
  File "/usr/lib/python3.6/site-packages/oslo_messaging/rpc/dispatcher.py", line 229, in _do_dispatch
    result = func(ctxt, **new_args)
  File "/usr/lib/python3.6/site-packages/cloudkitty/orchestrator.py", line 120, in quote
    return str(worker.quote(res_data))
  File "/usr/lib/python3.6/site-packages/cloudkitty/orchestrator.py", line 223, in quote
    processor.obj.quote(res_data)
  File "/usr/lib/python3.6/site-packages/cloudkitty/rating/__init__.py", line 106, in quote
    return self.process(data)
  File "/usr/lib/python3.6/site-packages/cloudkitty/rating/hash/__init__.py", line 262, in process
    output = dataframe.DataFrame(start=data.start, end=data.end)
AttributeError: 'list' object has no attribute 'start'
". Detail:
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/wsmeext/pecan.py", line 85, in callfunction
    result = f(self, *args, **kwargs)
  File "/usr/lib/python3.6/site-packages/cloudkitty/api/v1/controllers/rating.py", line 205, in quote
    res = client.call({}, 'quote', res_data=[{'usage': res_dict}])
  File "/usr/lib/python3.6/site-packages/oslo_messaging/rpc/client.py", line 179, in call
    transport_options=self.transport_options)
  File "/usr/lib/python3.6/site-packages/oslo_messaging/transport.py", line 128, in _send
    transport_options=transport_options)
  File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 682, in send
    transport_options=transport_options)
  File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 672, in _send
    raise result
AttributeError: 'list' object has no attribute 'start'
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming
    res = self.dispatcher.dispatch(message)
  File "/usr/lib/python3.6/site-packages/oslo_messaging/rpc/dispatcher.py", line 309, in dispatch
    return self._do_dispatch(endpoint, method, ctxt, args)
  File "/usr/lib/python3.6/site-packages/oslo_messaging/rpc/dispatcher.py", line 229, in _do_dispatch
    result = func(ctxt, **new_args)
  File "/usr/lib/python3.6/site-packages/cloudkitty/orchestrator.py", line 120, in quote
    return str(worker.quote(res_data))
  File "/usr/lib/python3.6/site-packages/cloudkitty/orchestrator.py", line 223, in quote
    processor.obj.quote(res_data)
  File "/usr/lib/python3.6/site-packages/cloudkitty/rating/__init__.py", line 106, in quote
    return self.process(data)
  File "/usr/lib/python3.6/site-packages/cloudkitty/rating/hash/__init__.py", line 262, in process
    output = dataframe.DataFrame(start=data.start, end=data.end)
AttributeError: 'list' object has no attribute 'start'

I changed those Python files with this patch and restarted the cloudkitty_api docker container, but had no success.

I'm using cloudkitty, ceilometer, gnocchi (kolla containers) to achieve a billing system.

I also downgraded from Wallaby to Ussuri, but the errors are the same.

This is my kolla-ansible globals.yml:

config_strategy: "COPY_ALWAYS"
kolla_base_distro: "ubuntu"
kolla_install_type: "source"
openstack_release: "wallaby"
kolla_internal_vip_address: "192.168.76.10"
network_interface: "eno1"
neutron_external_interface: "eno2"
neutron_plugin_agent: "openvswitch"
enable_haproxy: "no"
enable_ceilometer: "yes"
enable_cinder: "yes"
enable_cinder_backup: "no"
enable_cinder_backend_lvm: "no"
enable_cloudkitty: "yes"
enable_gnocchi: "yes"
enable_neutron_provider_networks: "yes"
ceph_cinder_keyring: "ceph.client.admin.keyring"
ceph_cinder_user: "admin"
ceph_cinder_pool_name: "volumes"
fernet_token_expiry: 86400
cinder_backend_ceph: "yes"
cinder_volume_group: "volumes"
nova_compute_virt_type: "kvm"
nova_console: "novnc"
enable_openstack_core: "yes"

So, any ideas?

RemainAfterExit in Upstart

Posted: 16 Oct 2021 05:23 PM PDT

Is there an Upstart equivalent to systemd's RemainAfterExit?

I have an upstart task that exec's a script that completes quickly when the task is started. However, I would still like that task to report as active so that I can subsequently 'stop' the task and have it execute a cleanup script.

In systemd, I would do the following:

[Service]
Type=oneshot
RemainAfterExit=true
ExecStart=/usr/local/bin/my_script.sh create %i
ExecStop=/usr/local/bin/my_script.sh delete %i

How would I do the same thing in Upstart?

Dockerfile Works Locally But Fails on EB (Elastic Beanstalk) Deploy (PHP 7.3 with OCI8 extension)

Posted: 16 Oct 2021 03:57 PM PDT

Good day, fellow developers!

I have been searching for 2 weeks now for how to install the OCI8 PHP extension on Elastic Beanstalk using .ebextensions, but sadly I can't find anything similar.

Before I arrived at the conclusion to use .ebextensions, I tried the Docker approach first. I created an image with the OCI8 PHP extension and the Oracle Instant Client dependencies. It was working fine in my local Docker setup, but errors appeared when I tried deploying it to EB.

After reading some more, I stumbled upon this AWS article: How do I install PECL 7 modules on Elastic Beanstalk environments running on PHP with Amazon Linux 1 stacks?. From that, I concluded that this is the best option in my case. The problem is that there are almost no articles to be found that pertain to OCI8, Elastic Beanstalk, and .ebextensions together.

Has anyone tried using the .ebextensions config files to install the OCI8 PHP extension? Any clue will really help.

How to restart a single container in AWS ECS Task Definition

Posted: 16 Oct 2021 03:01 PM PDT

In my AWS ECS cluster, there is one service running two tasks. Each task has 5 containers, two of which are not essential. Among these two, one container sometimes fails, but I am not sure how to restart that single container.

docker-compose.yml has a restart: always option; I am assuming some similar feature might restart the container automatically.

Is there any way to restart a single container without touching other containers in ECS Task?

Alpine Linux timezone doesn't stick if tzdata is removed

Posted: 16 Oct 2021 04:01 PM PDT

This used to work to set the timezone. I have a container on Alpine 3.9.4 where it worked:

RUN apk add --no-cache tzdata
ENV TZ America/Chicago
RUN apk del tzdata

I'm now creating a Docker container with Alpine Linux v3.10.3, and it doesn't work anymore. A user suggested that I need to copy to /etc/localtime:

RUN apk add --no-cache tzdata
ENV TZ America/Chicago
RUN cp /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN apk del tzdata

Neither of these works if tzdata is removed. However, both work if tzdata is not removed. Why is this?
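One pattern that is often suggested is sketched below; the explanation that musl resolves the TZ zone name through the tzdata files, and therefore falls back to UTC once they are deleted, is my assumption about why the original fails:

# copy the zone file while tzdata is still installed, then remove the package
RUN apk add --no-cache tzdata \
 && cp /usr/share/zoneinfo/America/Chicago /etc/localtime \
 && echo "America/Chicago" > /etc/timezone \
 && apk del tzdata
# point TZ at the copied file instead of a zone name that no longer resolves
ENV TZ /etc/localtime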


Extended Support for Windows Server 2008 on Azure

Posted: 16 Oct 2021 05:06 PM PDT

We have a SharePoint 2010 farm on premises. Extended support for Windows Server 2008 ends in January 2020. The Microsoft documentation here (https://support.microsoft.com/en-in/help/4456235/end-of-support-for-windows-server-2008-and-windows-server-2008-r2) mentions that if the Windows 2008 servers are migrated to Azure, customers get 3 additional years of Critical and Important security updates at no additional charge. We would like to know whether support for SharePoint 2010 and SQL Server 2008 R2 would also be extended. What are the Microsoft guidelines for SharePoint 2010 and SQL Server 2008?

Unable to locate package Nginx-module-GeoIP

Posted: 16 Oct 2021 07:01 PM PDT

I am working on Debian Jessie 9. I have installed Nginx but there is no GeoIP module.

So I decided to install it, but apt-get install nginx-module-geoip is not working; it gives an error like E: Unable to locate package Nginx-module-GeoIP

How can I install the GeoIP module in nginx?
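One route is the prebuilt dynamic module from the nginx.org repository, since Debian's own packaging ships the module under a different name (libnginx-mod-http-geoip for the distro nginx). A sketch for the nginx.org route; adjust the codename to your Debian release:

# add the nginx.org repository and install the dynamic GeoIP module
curl -fsSL https://nginx.org/keys/nginx_signing.key | sudo apt-key add -
echo "deb http://nginx.org/packages/debian stretch nginx" | sudo tee /etc/apt/sources.list.d/nginx.list
sudo apt-get update
sudo apt-get install nginx nginx-module-geoip
# then load it near the top of /etc/nginx/nginx.conf:
#   load_module modules/ngx_http_geoip_module.so;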

Fortigate to Azure VPN -- connected but can't reach anything

Posted: 16 Oct 2021 09:05 PM PDT

I have set up an IPSec VPN between a Fortigate and Azure, according to the following instructions:

https://cookbook.fortinet.com/ipsec-vpn-microsoft-azure-56/

The VPN connected the first time, but I cannot see the virtual server from the local network, or anything on the local network from the server.

My configuration is as follows:

  • Local network: 10.1.0.1/21
  • Azure v-net: 10.1.100.0/23
  • Azure subnet: 10.1.100.0/25
  • Azure gateway subnet: 10.1.101.0/24

I have tried pinging and RDP'ing to my server (10.1.100.10) from my computer (on the LAN), and pinging my computer from the server. Nothing gets through (even with firewalls down, or pinging from other locations).

I already created the static route and the policies in the Fortigate.

Although not on the instructions, I tried creating a routing table in Azure with the local network subnet going through the Virtual Network.

Any ideas on what I should try next?

Thanks!! -- Luis

WSUS Synchronization Schedule - Once per month?

Posted: 16 Oct 2021 10:04 PM PDT

WSUS has no option for syncing once per month; it requires daily synchronization. Is it possible to change this to once per month through GPO or other means, or is the only other option to sync manually?

HTTP/2 between Nginx reverse proxy and Express

Posted: 16 Oct 2021 07:01 PM PDT

I have an Express web server behind an Nginx reverse proxy.
Nginx is configured for HTTP/2.

Is it better to leave the default HTTP/1 connection between Nginx and Express, or is it worth upgrading Express to HTTP/2 as well?

I guess there'll be some performance loss since SSL is required on both, but I don't know whether multiplexing (and other improvements) will make up for it.

Cannot Join Domain by Name but can Ping DC IP and Domain Name

Posted: 16 Oct 2021 05:06 PM PDT

I am trying to join a Windows 7 client to a domain. The domain was created on Windows Server 2012 (Core version) and is fully working there.

From the Windows 7 client, I can ping "10.0.0.2" and "xyz.com", but I cannot seem to join the domain.

The following error occurs:

Could Not Be contacted Error

Also, the Windows 7 client's IP is in the same range as the DC's (client: 10.0.0.20, DC: 10.0.0.2).

And the client's DNS server is set to the server's IP.

Trouble with windows persistent route

Posted: 16 Oct 2021 06:03 PM PDT

I have a laptop that is connected wirelessly to the 192.168.1.0/24 network using DHCP, and wired to the 10.10.10.0/24 network with static settings and NO DEFAULT GATEWAY set up.

The goal was to reach external addresses using the default gateway on the wireless network (192.168.1.1), and the internal networks (10.10.10.0/24, 10.10.20.0/24 and so on, up to 10.10.60.0/24) using the wired NIC. So I've added the following persistent routes:

===========================================================================
Persistent Routes:
  Network Address          Netmask  Gateway Address  Metric
       10.10.60.0    255.255.255.0       10.10.10.1       1
       10.10.50.0    255.255.255.0       10.10.10.1       1
       10.10.40.0    255.255.255.0       10.10.10.1       1
       10.10.20.0    255.255.255.0       10.10.10.1       1
       10.10.30.0    255.255.255.0       10.10.10.1      11
       10.10.10.0    255.255.255.0       10.10.10.1       1
===========================================================================

The routing table is the following:

IPv4 Route Table
===========================================================================
Active Routes:
Network Destination        Netmask          Gateway       Interface  Metric
          0.0.0.0          0.0.0.0      192.168.1.1    192.168.1.110      2
       10.10.10.0    255.255.255.0         On-link       10.10.10.27    266
       10.10.10.0    255.255.255.0       10.10.10.1      10.10.10.27     11
      10.10.10.27  255.255.255.255         On-link       10.10.10.27    266
     10.10.10.255  255.255.255.255         On-link       10.10.10.27    266
       10.10.30.0    255.255.255.0       10.10.10.1      10.10.10.27     21
       10.10.50.0    255.255.255.0       10.10.10.1      10.10.10.27     11
        127.0.0.0        255.0.0.0         On-link         127.0.0.1    331
        127.0.0.1  255.255.255.255         On-link         127.0.0.1    331
  127.255.255.255  255.255.255.255         On-link         127.0.0.1    331
      192.168.1.0    255.255.255.0         On-link     192.168.1.110    257
    192.168.1.110  255.255.255.255         On-link     192.168.1.110    257
    192.168.1.255  255.255.255.255         On-link     192.168.1.110    257
        224.0.0.0        240.0.0.0         On-link         127.0.0.1    331
        224.0.0.0        240.0.0.0         On-link       10.10.10.27    266
        224.0.0.0        240.0.0.0         On-link     192.168.1.110    257
  255.255.255.255  255.255.255.255         On-link         127.0.0.1    331
  255.255.255.255  255.255.255.255         On-link       10.10.10.27    266
  255.255.255.255  255.255.255.255         On-link     192.168.1.110    257

But after all this, the packets take the wrong path:

C:\WINDOWS\system32>tracert -d 10.10.60.1

Tracing route to 10.10.60.1 over a maximum of 30 hops

  1     5 ms     3 ms     3 ms  192.168.1.1
==============================================
^C

C:\WINDOWS\system32>

Why do the packets go via 192.168.1.1?

Shouldn't the packets follow the persistent route (via 10.10.10.1)?

Apache ReverseProxy settings for Portainer

Posted: 16 Oct 2021 09:05 PM PDT

I currently have a couple of Docker containers running on a single node, two of which are an Apache web server that I have configured as a reverse proxy, and Portainer, which allows me to manage my containers via a GUI.

I have tried following this thread: https://github.com/portainer/portainer/issues/488 but have been unable to forward traffic from Apache to Portainer.

Here is my httpd.conf file:

<Location /portainer/>
    AuthBasicProvider ldap
    AuthLDAPURL someldap
    AuthType Basic
    AuthName SomeAuthName
    require valid-user
</Location>

ProxyPass /portainer/api/websocket/ ws://172.18.0.8:9000/api/websocket/

</VirtualHost>

Any ideas?

Thank you!

Setting the Default Printer for the SYSTEM User in Windows Server 2012

Posted: 16 Oct 2021 08:02 PM PDT

Is there a way to set the default printer for the SYSTEM USER?

Or, alternatively is there a way to set the default printer for ALL users of the server?

I am a C#/SQL Server developer by trade, so this stuff is a bit beyond me, and Google hasn't been of much use (all old-as-dirt posts, nothing specific to 2012).

Basic Use Case:

If I log in as a standard user, I can look at the list of printers and then right click on one of the printers and set it as my default.

Rather than doing that for each user on the server, is there a way to set the default printer for all users?

Or - specifically is there a way to set the default printer for the SYSTEM account?

Monit: monitoring a logfile for block counting and timestamp changes

Posted: 16 Oct 2021 10:04 PM PDT

I want to monitor a logfile, and I am only interested in the "Received new block" lines. I need two different checks to monitor:

  • The height, which should always be one number higher than the height in the previous "Received new block" line. If it's not +1 AND it's not changing within 120 seconds, THEN alarm.
  • The timestamp (only for the "Received new block"-lines), which should always change. If no change occurs for 120 seconds THEN alarm.

All other lines are not of interest here and can be ignored. I have tried to find examples to bring this together but am still not successful, so I hope you can help me (a starting-point sketch follows the monitrc below).

log-snippet

{"level":"warn","message":"Main queue","timestamp":"2016-04-30 19:49:33","data":50}  {"level":"info","message":"Checking blockchain on 11.22.33.44:1234","timestamp":"2016-04-30 19:49:33"}  {"level":"warn","message":"Balance queue","timestamp":"2016-04-30 19:49:39","data":50}    {"level":"info","message":"Received new block id: 12345678901234567890 height: 8761 round: 87 slot: 3350818 reward: 100000000","timestamp":"2016-04-30 19:49:41"}    {"level":"info","message":"Removing peer POST http://11.22.33.44:1234/peer/transactions","timestamp":"2016-04-30 19:49:42"}  {"level":"warn","message":"Main queue","timestamp":"2016-04-30 19:49:43","data":94}  {"level":"warn","message":"Main queue","timestamp":"2016-04-30 19:49:43","data":93}  {"level":"warn","message":"Main queue","timestamp":"2016-04-30 19:49:43","data":52}  {"level":"warn","message":"Main queue","timestamp":"2016-04-30 19:49:43","data":51}  {"level":"warn","message":"Main queue","timestamp":"2016-04-30 19:49:43","data":50}  {"level":"info","message":"Checking blockchain on 11.22.33.44:1234","timestamp":"2016-04-30 19:49:44"}  {"level":"info","message":"Removing peer POST http://11.22.33.44:1234/peer/blocks","timestamp":"2016-04-30 19:49:46"}    {"level":"info","message":"Received new block id: 12345678901234567890 height: 8762 round: 87 slot: 3350819 reward: 100000000","timestamp":"2016-04-30 19:49:50"}  

monitrc

set daemon 120            # check services at 2-minute intervals
set logfile /var/log/monit.log
set idfile /var/lib/monit/id
set statefile /var/lib/monit/state
set mailserver SMTP.MAILHOSTER.COM port 587         # primary mailserver
     username "LoginUsername" password "LoginPassword"
     using ssl
     with timeout 30 seconds
set eventqueue
      basedir /var/lib/monit/events # set the base directory where events will be stored
      slots 100                     # optionally limit the queue size
set mail-format {
        from: SEND@MAILHOSTER.COM
        subject: ALARM on Test-Server -- $EVENT $SERVICE
        message: $EVENT Service $SERVICE
        Date:        $DATE
        Action:      $ACTION
        Host:        $HOST
        Description: $DESCRIPTION

        Bye,
        Monit
}
set alert RECEIVE@example.net       # receive all alerts
include /etc/monit/conf.d/*
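As a starting point, one common pattern is to wrap the log parsing in a small script and wire it into monit with a check program test. The sketch below only covers the "timestamp must keep changing" rule; the log path and script location are assumptions, and GNU date is assumed for date -d:

#!/bin/sh
# /usr/local/bin/check_new_block.sh (hypothetical path)
# Exit non-zero if the newest "Received new block" line is older than 120 seconds.
LOG=/var/log/app/app.log        # assumed log location
LINE=$(grep 'Received new block' "$LOG" | tail -n 1)
[ -z "$LINE" ] && exit 1
TS=$(printf '%s' "$LINE" | sed -n 's/.*"timestamp":"\([^"]*\)".*/\1/p')
THEN=$(date -d "$TS" +%s) || exit 1
[ $(( $(date +%s) - $THEN )) -gt 120 ] && exit 1
exit 0

It would then be referenced from monitrc with a stanza along the lines of check program new_block with path /usr/local/bin/check_new_block.sh followed by if status != 0 then alert; the height-increment rule could follow the same pattern by comparing the last two matching lines.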

nginx - 502 Bad Gateway ubuntu 14.04 aws ec2 django project + gunicorn

Posted: 16 Oct 2021 04:01 PM PDT

I'm trying to get my Django project up and running on an AWS EC2 instance. I'm using gunicorn with nginx, and I'm not really sure how to tackle this problem. I've spent a couple of hours on it already, including looking at other posts on this site, but I'm still stuck. Here's what's wrong: along with the 502 Bad Gateway, my nginx error log keeps giving me this:

2015/07/17 08:32:32 [error] 8049#0: *18 connect() failed (111: Connection refused) while connecting to upstream, client: ip.ip.ip.ip, server: ip.ip.ip.ip, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8001/", host: "ec2-numbers.us-west-1.compute.amazonaws.com"  

My /etc/nginx/sites-available/at_api.conf looks like this (Is the indentation okay on this?):

server {
    listen 80;
    server_name ip.ip.ip.ip;
    access_log /var/log/nginx/site_access.log;
    error_log /var/log/nginx/site_error.log;

    location /static/ {
        alias /home/ubuntu/static/;
    }

    location / {
        proxy_pass http://127.0.0.1:8001;
        proxy_set_header X-Forwarded-Host $server_name;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
    }
}

This is my first time setting up a Django project on EC2, so I'm not really sure if this is the right way to do it. Any tips? P.S. I've seen another similar post saying that php-fpm wasn't configured properly, but I'm using Django, so I'm not using any PHP.

Edit: My at_api/gunicorn.conf.py

proc_name = "at_api"
bind = '127.0.0.1:8001'
loglevel = "error"
workers = 2

Edit 2: Netstat

Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      -
tcp        0      0 127.0.0.1:5432          0.0.0.0:*               LISTEN      -
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      8463/nginx: worker
tcp6       0      0 :::22                   :::*                    LISTEN      -
udp        0      0 0.0.0.0:68              0.0.0.0:*                           -
udp        0      0 0.0.0.0:10524           0.0.0.0:*                           -
udp6       0      0 :::21956                :::*                                -
Active UNIX domain sockets (only servers)
Proto RefCnt Flags       Type       State         I-Node   PID/Program name    Path
unix  2      [ ACC ]     STREAM     LISTENING     8754     -                   /var/run/dbus/system_bus_socket
unix  2      [ ACC ]     STREAM     LISTENING     52566    -                   /var/run/supervisor.sock.8446
unix  2      [ ACC ]     STREAM     LISTENING     6691     -                   @/com/ubuntu/upstart
unix  2      [ ACC ]     STREAM     LISTENING     9075     -                   /var/run/acpid.socket
unix  2      [ ACC ]     STREAM     LISTENING     35450    -                   /var/run/postgresql/.s.PGSQL.5432
unix  2      [ ACC ]     SEQPACKET  LISTENING     14550    -                   /run/udev/control
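One thing the netstat output suggests is that nothing is listening on 127.0.0.1:8001, which would explain the connection-refused upstream error. A quick sketch to confirm and to start gunicorn by hand with the config above (the WSGI module name at_api.wsgi:application and the project path are assumptions to adapt):

# is anything listening on the gunicorn port?
sudo ss -tlnp | grep 8001
# run gunicorn in the foreground to watch for startup errors
cd /home/ubuntu/at_api
gunicorn -c gunicorn.conf.py at_api.wsgi:application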

Windows Cluster - Moving a node to different network

Posted: 16 Oct 2021 06:03 PM PDT

I have a 3-node SQL Server Always On cluster running. Is it possible to move a node to a different network (subnet) without evicting and re-creating the node?

Syncing a directory with an SVN repository

Posted: 16 Oct 2021 08:02 PM PDT

I need to create/update/delete files in a directory (and its subdirectories) every time an SVN repo is updated.

I was told this can be done by writing a script that uses the output of the svnlook changed command.

I wonder: Is there an already written script for this?

Added: I think svnsync is not suitable for this, as it needs the synced tree to contain .svn folders, which is no good for us. (Actually it's even less suitable than that: svnsync synchronizes repositories, not working directories, and it's a plain directory that I need to keep in sync.)
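I'm not aware of a canonical ready-made script, but a minimal post-commit hook sketch based on svnlook changed (the destination path and hook location are assumptions, and paths containing spaces are not handled) would look something like this:

#!/bin/sh
# hooks/post-commit sketch: mirror each changed path into a plain directory
REPOS="$1"
REV="$2"
TARGET=/srv/exported-tree        # assumed destination directory
svnlook changed -r "$REV" "$REPOS" | while read -r action path; do
  case "$path" in
    */) mkdir -p "$TARGET/$path"; continue ;;   # directory entry
  esac
  case "$action" in
    D*) rm -f "$TARGET/$path" ;;                # deleted file
    *)  mkdir -p "$TARGET/$(dirname "$path")"
        svnlook cat -r "$REV" "$REPOS" "$path" > "$TARGET/$path" ;;
  esac
done

A simpler alternative, if a .svn directory on the destination turns out to be acceptable after all, is to have the hook just run svn update against a checked-out working copy.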
