How could I add error handling to skip a stopped server and continue running the script to the next instance Posted: 25 Sep 2021 09:51 PM PDT All: I have a script that uses the Python winrm library to connect to each instance and run a local script on the instance to stop the SQL engine and SQL agent services. There is an error-handling problem here. When one of the servers is already powered off, the winrm session fails to make a connection, and then none of the instances after this stopped instance are considered for powering off. How could I add a mechanism to disregard stopped EC2 instances and continue to the next instance on the list? The code is as below:

```
try:
    print("Establishing connection to %s" %ip)
    s = winrm.Session(ip, auth=('{}@{}'.format(user,domain), password), transport='ntlm')
    print("Running shutdown script")
    r = s.run_ps('c:\shutdown_script.ps1')
    print(r.status_code)
    print(r.std_err)
    print(r.std_out)
except winrm.exceptions.WinRMTransportError:
    print("You are getting a WinRMTransportError, the reason is unknown, please try again.")
```

The corresponding error when it cannot make the connection is as below:

```
[ERROR] ConnectTimeout: HTTPConnectionPool(host='10.251.3.188', port=5985): Max retries exceeded with url: /wsman (Caused by ConnectTimeoutError(<urllib3.connection.HTTPConnection object at 0x7f13e700a810>, 'Connection to 10.251.3.188 timed out. (connect timeout=120)'))
Traceback (most recent call last):
  File "/var/task/lambda_function.py", line 249, in lambda_handler
    r = s.run_ps('c:\shutdown_script.ps1')
  File "/opt/python/winrm/__init__.py", line 52, in run_ps
    rs = self.run_cmd('powershell -encodedcommand {0}'.format(encoded_ps))
  File "/opt/python/winrm/__init__.py", line 39, in run_cmd
    shell_id = self.protocol.open_shell()
  File "/opt/python/winrm/protocol.py", line 166, in open_shell
    res = self.send_message(xmltodict.unparse(req))
  File "/opt/python/winrm/protocol.py", line 243, in send_message
    resp = self.transport.send_message(message)
  File "/opt/python/winrm/transport.py", line 321, in send_message
    self.build_session()
  File "/opt/python/winrm/transport.py", line 304, in build_session
    self.setup_encryption()
  File "/opt/python/winrm/transport.py", line 310, in setup_encryption
    self._send_message_request(prepared_request, '')
  File "/opt/python/winrm/transport.py", line 339, in _send_message_request
    response = self.session.send(prepared_request, timeout=self.read_timeout_sec)
  File "/opt/python/requests/sessions.py", line 655, in send
    r = adapter.send(request, **kwargs)
  File "/opt/python/requests/adapters.py", line 504, in send
    raise ConnectTimeout(e, request=request)
```

How could I add a status code after the statement `s = winrm.Session(ip, auth=('{}@{}'.format(user,domain), password), transport='ntlm')` to know whether the connection has been established successfully. |
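A minimal sketch of one way to keep the loop going. Note that `winrm.Session()` itself does not open a network connection; as the traceback shows, the timeout only fires when `run_ps()` is called, so the try/except has to wrap the command execution rather than the Session construction. The list name `instance_ips` is invented for the example, and `user`, `domain` and `password` are assumed to be the same variables used in the original script:

```python
import winrm
import requests

# Hypothetical list of targets; in the real script these would come from EC2.
instance_ips = ['10.251.3.188', '10.251.3.189']

for ip in instance_ips:
    try:
        print("Establishing connection to %s" % ip)
        s = winrm.Session(ip, auth=('{}@{}'.format(user, domain), password), transport='ntlm')
        # The connection is only attempted here, so powered-off hosts fail here.
        r = s.run_ps(r'c:\shutdown_script.ps1')
        print(r.status_code)
    except (requests.exceptions.ConnectionError,
            winrm.exceptions.WinRMTransportError) as exc:
        # requests.exceptions.ConnectTimeout (seen in the traceback) is a
        # subclass of ConnectionError, so it is caught here.
        print("Skipping %s, could not connect: %s" % (ip, exc))
        continue
```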
How do I create a python 3 service that uses socket with systemd? Posted: 25 Sep 2021 04:08 PM PDT I am trying to create a service with systemd, where I use python3 to create a simple socket and leave it running as a daemon. I have made several attempts, in both cases without any success. For today systemd has beaten me, but tomorrow is another day to try. Server:

```
import socket

host = '127.0.0.1'
port = 9999
BUFFER_SIZE = 1024

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as socket_tcp:
    socket_tcp.bind((host, port))
    socket_tcp.listen(5)
    # Wait for the client connection
    conn, addr = socket_tcp.accept()
    # Establish the connection with the client
    with conn:
        print('[*] Connection established')
        while True:
            # Receive bytes, convert to str
            data = conn.recv(BUFFER_SIZE)
            # Check that we actually received data
            if not data:
                break
            else:
                print('[*] Data received: {}'.format(data.decode('utf-8')))
                conn.send(data)  # Echo it back, converting to bytes again
```

Client:

```
import socket

# The client must use the same settings as the server
host = '127.0.0.1'
port = 9999
BUFFER_SIZE = 1024
MESSAGE = 'Hola, mundo!'  # Data we want to send

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as socket_tcp:
    socket_tcp.connect((host, port))
    # Convert str to bytes
    socket_tcp.send(MESSAGE.encode('utf-8'))
    data = socket_tcp.recv(BUFFER_SIZE)
```

Config Unit File:

```
sudo nano /etc/systemd/system/socket_prueba.service
sudo rm -r /etc/systemd/system/socket_prueba.service

[Unit]
Description= Server Relay System: Manager
After=multi-user.target

[Service]
Type=simple
Restart=always
ExecStart=/usr/local/bin/pipenv run python /path/test_server.py

[Install]
WantedBy=multi-user.target

sudo systemctl daemon-reload
sudo systemctl enable socket_prueba.service
sudo systemctl start socket_prueba.service
sudo systemctl status socket_prueba.service
```

Result:

```
● socket_prueba.service - Server Relay System: Manager
   Loaded: loaded (/etc/systemd/system/socket_prueba.service; enabled; vendor preset: enabled)
   Active: failed (Result: exit-code) since Sat 2021-09-25 16:07:17 -05; 58min ago
  Process: 25771 ExecStart=/usr/local/bin/pipenv run python /home/path>
 Main PID: 25771 (code=exited, status=2)
sep 25 16:07:17 serversaas systemd[1]: socket_prueba.service: Scheduled restart job, restart counter is >
sep 25 16:07:17 serversaas systemd[1]: Stopped Server Relay System: Manager.
sep 25 16:07:17 serversaas systemd[1]: socket_prueba.service: Start request repeated too quickly.
sep 25 16:07:17 serversaas systemd[1]: socket_prueba.service: Failed with result 'exit-code'.
sep 25 16:07:17 serversaas systemd[1]: Failed to start Server Relay System: Manager.
```

Attempt 2 (source: systemd and python socket, "con python y systemd"):

```
● socket_prueba.socket - Socket prueba
   Loaded: loaded (/etc/systemd/system/socket_prueba.socket; disabled; vendor preset: enabled)
   Active: failed (Result: service-start-limit-hit) since Sat 2021-09-25 17:00:47 -05; 4s ago
 Triggers: ● socket_prueba.service
   Listen: 127.0.0.1:9999 (Stream)
sep 25 17:00:47 vidm-OMEN systemd[1]: Listening on Socket prueba.
sep 25 17:00:47 vidm-OMEN systemd[1]: socket_prueba.socket: Failed with result 'service-start-limit-hit'.
``` |
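The status=2 exit and the "start request repeated too quickly" messages suggest the script itself is exiting almost immediately (an unhandled Python error, or a server that quits once the first client disconnects), and Restart=always then trips the start-limit. A minimal sketch of a server loop that keeps the service alive between clients, assuming the same host and port as above; the systemd unit itself is left unchanged:

```python
import socket

HOST, PORT = '127.0.0.1', 9999
BUFFER_SIZE = 1024

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)  # survive quick restarts
    srv.bind((HOST, PORT))
    srv.listen(5)
    while True:                      # keep accepting clients forever; never exit
        conn, addr = srv.accept()
        with conn:
            while True:
                data = conn.recv(BUFFER_SIZE)
                if not data:
                    break            # client closed; go back to accept()
                conn.send(data)      # echo
```

If the unit still exits with status=2, running the ExecStart command by hand (`/usr/local/bin/pipenv run python /path/test_server.py`) usually shows the underlying traceback; pipenv in particular needs the right `WorkingDirectory=` in the unit to find its Pipfile, which is an assumption worth checking here.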
Programmatically get full path to binary in powershell (which, where, Get-Command) Posted: 25 Sep 2021 03:26 PM PDT How do I get the absolute path to a given binary and store it in a variable? What is the equivalent in Windows Powershell of the following for Linux Bash?

```
user@disp985:~$ path=`which gpg`
user@disp985:~$ echo $path
/usr/bin/gpg
user@disp985:~$
user@disp985:~$ $path
gpg: keybox '/home/user/.gnupg/pubring.kbx' created
gpg: WARNING: no command supplied. Trying to guess what you mean ...
gpg: Go ahead and type your message ...
```

In Windows Powershell, there's Get-Command, but the output is hardly trivial to parse programmatically in a script.

```
PS C:\Users\user> Get-Command gpg.exe
CommandType     Name        Version     Source
-----------     ----        -------     ------
Application     gpg.exe     2.2.28....  C:\Program Files (x86)\Gpg4win\..\GnuP...
PS C:\Users\user>
```

How can I programmatically determine the full path to a given binary in Windows Powershell, store it in a variable, and execute it? |
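One way to approach this, sketched below: the table shown above is only the console formatting. Get-Command actually returns a CommandInfo object, so the full path can be read from its Source property (older PowerShell versions expose it as Path or Definition) and the result can be invoked with the call operator `&`:

```powershell
# Store the full path of the binary in a variable
$gpgPath = (Get-Command gpg.exe).Source   # e.g. C:\Program Files (x86)\GnuPG\bin\gpg.exe

# Inspect and execute it
Write-Output $gpgPath
& $gpgPath --version

# Or without the intermediate variable
& (Get-Command gpg.exe).Source --version
```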
Exchange 2016 ActiveSync in Edge Server Posted: 25 Sep 2021 04:06 PM PDT Dears, We currently have an Exchange 2016 Edge server located in our DMZ and it is handling all our inbound and outbound emails. I noticed that there is no Activesync service/role inside Edge server. I can only see it inside our internal mailboxes servers which are located in our private, LAN, network. Is it possible to enable/install Activesync role inside Edge server? Because we do not want to expose our internal mailbox servers to the internet directly. Regards, |
Is the slow down due to traffic every day near 12 pm Posted: 25 Sep 2021 05:31 PM PDT I checked the slow logs and I got only 4 queries in 2 hours and all of them were similar to this: "SELECT HEX(uhash) AS uhash, vehid, IF(deleted = 0 AND follow_price_drop = 1, 1, 0) AS follow_price_drop, email, deleted FROM wp_ product_favorite_count AS cfc INNER JOIN wp_ product_favorite_user AS cfu ON cfc. product_favorite_user_uhash = cfu.uhash WHERE cfc.updated > '2021-09-23 12:49:02' OR cfu.updated > '2021-09-23 12:49:02'" I checked top and htop and I often get 100 cpu usage on all 6 cpu cores. Most of the CPU usage come from mysqld, so I logged the db: https://pastebin.com/BBv7ngW5 iostat -xm 5 3 gave me: avg-cpu: %user %nice %system %iowait %steal %idle 11.34 0.01 1.80 1.13 0.08 85.65 Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util xvda 39.75 720.61 79.81 192.29 0.99 3.57 34.30 0.02 0.09 0.19 0.04 0.09 2.53 ^[[A^[[A^[[Aavg-cpu: %user %nice %system %iowait %steal %idle 84.15 0.00 6.16 0.05 0.03 9.61 Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util xvda 0.80 31.00 14.40 19.80 0.65 0.20 50.95 0.02 0.73 0.93 0.58 0.43 1.48 ^[[A^[[Bavg-cpu: %user %nice %system %iowait %steal %idle 84.54 0.00 4.95 0.10 0.05 10.36 Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util xvda 0.00 2.40 22.60 1.60 1.77 0.02 151.40 0.02 1.02 1.04 0.75 0.64 1.56 ulimit -a core file size (blocks, -c) 0 data seg size (kbytes, -d) unlimited scheduling priority (-e) 0 file size (blocks, -f) unlimited pending signals (-i) 128341 max locked memory (kbytes, -l) 64 max memory size (kbytes, -m) unlimited open files (-n) 1024 pipe size (512 bytes, -p) 8 POSIX message queues (bytes, -q) 819200 real-time priority (-r) 0 stack size (kbytes, -s) 10240 cpu time (seconds, -t) unlimited max user processes (-u) 128341 virtual memory (kbytes, -v) unlimited file locks (-x) unlimited I checked the general query log after checking the slow query log and was surprised that I got so many queries. When traffic is ordinary, I got: 136235 queries most of which are SELECT queries after 10 minutes. And when traffic is high, I got: 195650 queries in 10 minutes. I doubt it's 195650 visitors, but for some reason the calls are inside the general_log. The slow_query_log had only 4 queries and they didn't look like unoptimized queries. Is there anything else I should look at, or is this enough to surmise that it's from traffic and we should upgrade the server? 
top roughly look like this, I couldn't capture it in time, but when it reached 95%+ cpu, the screen looked like this: top - 13:04:51 up 1140 days, 19:59, 2 users, load average: 26.57, 16.21, 8.92 Tasks: 429 total, 12 running, 421 sleeping, 0 stopped, 0 zombie Cpu(s): 91.3%us, 1.6%sy, 0.0%ni, 65.7%id, 3.1%wa, 0.0%hi, 0.2%si, 0.1%st Mem: 32877280k total, 31367584k used, 1509696k free, 3960824k buffers Swap: 0k total, 0k used, 0k free, 3980580k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 14576 mysql 20 0 12.9g 8.5g 8424 S 951.6 27.2 18841:47 mysqld 6032 martind 20 0 510m 65m 9160 S 61.4 0.2 2:49.40 php-fpm 7329 martind 20 0 498m 63m 5556 R 57.6 0.2 0:47.15 php-fpm 7321 martind 20 0 487m 52m 5532 R 46.1 0.2 0:45.18 php-fpm 7160 martind 20 0 488m 52m 5540 R 44.1 0.2 1:02.67 php-fpm 6031 martind 20 0 511m 67m 8076 S 42.2 0.2 2:50.87 php-fpm 6696 martind 20 0 498m 63m 5700 S 38.4 0.2 1:36.38 php-fpm 7283 martind 20 0 494m 59m 5268 S 34.5 0.2 0:46.19 php-fpm 7314 martind 20 0 490m 55m 5536 R 33.0 0.2 0:44.22 php-fpm 7330 martind 20 0 496m 60m 5436 R 26.4 0.2 0:46.82 php-fpm 7305 martind 20 0 494m 58m 5572 R 25.4 0.2 0:48.85 php-fpm 6706 martind 20 0 507m 62m 8060 S 13.7 0.2 1:40.55 php-fpm 7276 martind 20 0 498m 63m 5264 S 7.7 0.2 0:49.89 php-fpm 17464 redis 20 0 4328m 2.3g 888 R 7.7 7.3 7827:30 redis-server 6402 martind 20 0 511m 67m 8056 S 5.8 0.2 2:15.21 php-fpm 6405 martind 20 0 512m 69m 9204 S 5.8 0.2 2:14.32 php-fpm 6703 martind 20 0 513m 67m 8056 S 5.8 0.2 1:39.40 php-fpm 6705 martind 20 0 513m 68m 9040 S 5.8 0.2 1:36.18 php-fpm 7303 martind 20 0 493m 57m 6556 S 5.8 0.2 0:47.04 php-fpm 7304 martind 20 0 494m 59m 5264 S 5.8 0.2 0:48.70 php-fpm 7323 martind 20 0 511m 67m 7772 S 5.8 0.2 0:45.53 php-fpm 24515 nginx 20 0 123m 66m 2452 S 5.8 0.2 7231:17 nginx 6039 martind 20 0 507m 63m 8200 S 3.8 0.2 2:48.39 php-fpm 6400 martind 20 0 511m 68m 8204 S 3.8 0.2 2:13.54 php-fpm 6401 martind 20 0 510m 66m 9052 S 3.8 0.2 2:13.36 php-fpm 6404 martind 20 0 512m 68m 9048 S 3.8 0.2 2:12.75 php-fpm So because there are so many SQL queries when it tends to slow down a lot, I am thinking it's caused by a high traffic. I checked the cronjobs (wordpress cronjobs and php cronjobs) and nothing seems to run when it slows, there might be a rsync process running at the same time, but the rsync process runs at all time, so I doubt it's caused by this. Is there anything I can check? |
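To separate "lots of cheap queries" from "a few expensive ones", it may help to look at what MySQL is executing at the moment the CPU spikes, and to note that the slow log only records queries above long_query_time (often 10 seconds by default), so it can easily hide the real offenders. A rough diagnostic sketch, assuming MySQL credentials are at hand:

```sql
-- Run while the CPU is pegged, e.g. via `mysql -e "..."` in a shell loop
SHOW FULL PROCESSLIST;                         -- which queries are running right now, and for how long
SHOW GLOBAL STATUS LIKE 'Threads_running';     -- concurrency at this instant
SHOW GLOBAL STATUS LIKE 'Queries';             -- sample twice to estimate queries per second
SHOW GLOBAL VARIABLES LIKE 'long_query_time';  -- threshold below which queries never reach the slow log
```

If Threads_running is high but every query finishes quickly, the load really is traffic (or missing page/object caching in front of WordPress); if a handful of queries sit in the processlist for seconds, lowering long_query_time temporarily should make them show up in the slow log. The open-files limit of 1024 shown by ulimit is also worth raising for a busy mysqld.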
What firewall changes are required to download Windows updates? Posted: 25 Sep 2021 04:11 PM PDT I have a Windows 2019 server sitting behind a firewall and an Internet proxy. What are the network ports and URLs that need to be whitelisted for this server to download Windows patches/updates from the Internet? |
Apache is using all space on main ssd [closed] Posted: 25 Sep 2021 03:07 PM PDT I have my own Nextcloud server. The /var/www/html folder has /dev/sdb2 mounted on it; /dev/sdb2 is my HDD. Recently, after some days without a reboot, the server reports: couldn't complete request. When I look at it, there's no free space left on /dev/sda5 (the main SSD). After a reboot, a whole 20GB is freed. I have redis, APCu and OPcache. Once I had to stop the Apache server (I removed as much as I could from /dev/sda5, but Apache filled it all up again instantly); a reboot fixed it. I am using Ubuntu 21.04. Why is Apache using all the space on /dev/sda? |
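The "space only comes back after a reboot" pattern often points at large files that were deleted but are still held open by a running process (rotated logs, PHP session or temp files); df keeps counting them until the holder exits. A sketch of how one might confirm where the space goes, with paths that are only the usual suspects:

```bash
# What is actually consuming the root filesystem (stay on this filesystem only)
sudo du -xh / --max-depth=2 2>/dev/null | sort -h | tail -n 20

# Deleted-but-still-open files: their space is reclaimed only when the owning
# process exits, which would explain why a reboot frees 20GB at once
sudo lsof +L1 | tail -n 30

# Typical Apache/PHP locations on the system disk worth watching
sudo du -sh /var/log/apache2 /tmp /var/tmp 2>/dev/null
```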
How to limit users to their home directory SFTP on Ubuntu 20.04 Posted: 25 Sep 2021 04:08 PM PDT Is there any way to limit users to certain home directories with SFTP on Ubuntu 20.04 while giving them write access to their own directory? I was able to restrict each user to seeing only their own directory and not browsing any others. However, they can't write to their directory now. Below is the config from /etc/ssh/sshd_config:

```
Match User XXXXXXXXX
    ForceCommand internal-sftp
    PasswordAuthentication yes
    ChrootDirectory %h
    PermitTunnel no
    AllowAgentForwarding no
    AllowTcpForwarding no
    X11Forwarding no
```

Thanks in advance for any help! |
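This is usually the classic chroot ownership constraint: sshd requires the ChrootDirectory itself (here the user's home, via %h) to be owned by root and not group- or world-writable, which is exactly what removes the user's write access. A common workaround, sketched with a hypothetical user `alice`, is to keep the home root-owned and give the user a writable subdirectory inside it:

```bash
# The chroot target must be root-owned and not writable by the user
sudo chown root:root /home/alice
sudo chmod 755 /home/alice

# Give the user a directory they can actually write into
sudo mkdir -p /home/alice/upload
sudo chown alice:alice /home/alice/upload

# Apply any sshd_config changes
sudo systemctl restart sshd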
Sensible Google Cloud security for Google Maps API project? Posted: 25 Sep 2021 04:03 PM PDT Sensible security for Google Maps API Google Maps API is now only available through Google Cloud Platform. I am working on a small project and I'm not sure what would be sensible for me to do in terms of security. I am the developer and my client, who is not very technical, is the owner of the GCP account controlling the billing. I have tried to understand the way IAM is set up in GCP from the docs, done searches and read several chapters of books on O'Reilly, but I'm still unclear what would be good practice (without getting too complicated, and ideally just using the Console) to protect the two GCP accounts we need. What I have done is: - Set up a new Google user account for myself specifically for GCP because my main Google account is tied to so many other services which I think potentially makes security weaker for GCP;
- Secured the two user accounts for GCP with 2 factor authentication;
- Created a project, for which I then became the owner (which I understand is not ideal because it gives very broad access to resources);
- Invited my client to sign up to GCP initially as the owner;
- Changed my client's roles to ApiGateway Admin and Project Billing Manager.
I am proposing to change my roles to include Project IAM Admin and ApiGateway Admin, and afterwards to remove owner. Then I think I should be able to manage Google Maps APIs and also add new roles to the project if I want to. My questions: Would those proposed roles be sufficient for me to manage Google Maps APIs for the project? I haven't got as far as managing API keys, but I will follow the official guidance on the Google Maps pages for that. Is there anything else I can advise my client to do to protect his GCP account from someone who might gain access to his account and try to add other services? For instance, would it help if he or I set up an organization or a folder structure? New tag google-cloud-iam included: Looking at Google support for IAM, I was advised to post with this tag on StackOverflow, which I did. My question was then marked 'off topic' and I was advised to post on Super User. It was marked 'off topic' there too, so I am now trying to post it here. google-cloud-iam appears to be Google's recommended tag; please would someone from the community with more authority add it (I only found google-iam here). |
Wireguard iface - icmp6 replies from lo interface Posted: 25 Sep 2021 03:24 PM PDT I use WireGuard between 2 Debian 11 hosts and face a problem: traffic comes in on the WireGuard interface/IP address but goes out with lo as the interface, albeit with the right IP address:

```
19:23:50.287492 wig0 In IP6 fd99:1234:beef:cafe:fade::7000 > fd99:1234:beef:cafe:fade::7fff: ICMP6, echo request, id 18272, seq 5, length 64
19:23:50.287509 lo   In IP6 fd99:1234:beef:cafe:fade::7fff > fd99:1234:beef:cafe:fade::7000: ICMP6, echo reply, id 18272, seq 5, length 64
```

Any clue on this? Thanks for your support Daniel |
Can I install php8.0 on Ubutnu 16.04 Posted: 25 Sep 2021 08:02 PM PDT I'm trying to install php8 on Ubuntu 16.04.5 LTS. Here is the list of commands and output that I ran: apt-get update apt install software-properties-common --- these commands went fine when ran add-apt-repository ppa:ondrej/php output: Co-installable PHP versions: PHP 5.6, PHP 7.x and most requested extensions are included. Only Supported Versions of PHP (http://php.net/supported-versions.php) for Supported Ubuntu Releases (https://wiki.ubuntu.com/Releases) are provided. Don't ask for end-of-life PHP versions or Ubuntu release, they won't be provided. Debian oldstable and stable packages are provided as well: https://deb.sury.org/#debian-dpa You can get more information about the packages at https://deb.sury.org IMPORTANT: The <foo>-backports is now required on older Ubuntu releases. BUGS&FEATURES: This PPA now has a issue tracker: https://deb.sury.org/#bug-reporting CAVEATS: 1. If you are using php-gearman, you need to add ppa:ondrej/pkg-gearman 2. If you are using apache2, you are advised to add ppa:ondrej/apache2 3. If you are using nginx, you are advised to add ppa:ondrej/nginx-mainline or ppa:ondrej/nginx PLEASE READ: If you like my work and want to give me a little motivation, please consider donating regularly: https://donate.sury.org/ WARNING: add-apt-repository is broken with non-UTF-8 locales, see https://github.com/oerdnj/deb.sury.org/issues/56 for workaround: # LC_ALL=C.UTF-8 add-apt-repository ppa:ondrej/php More info: https://launchpad.net/~ondrej/+archive/ubuntu/php Press [ENTER] to continue or ctrl-c to cancel adding it gpg: keyring `/tmp/tmpw60tb7ap/secring.gpg' created gpg: keyring `/tmp/tmpw60tb7ap/pubring.gpg' created gpg: requesting key E5267A6C from hkp server keyserver.ubuntu.com gpg: /tmp/tmpw60tb7ap/trustdb.gpg: trustdb created gpg: key E5267A6C: public key "Launchpad PPA for Ondřej Surý" imported gpg: Total number processed: 1 gpg: imported: 1 (RSA: 1) OK apt update output: Hit:1 http://ppa.launchpad.net/nginx/stable/ubuntu xenial InRelease Hit:2 http://ppa.launchpad.net/ondrej/php/ubuntu xenial InRelease Hit:3 http://eu-west-2.ec2.archive.ubuntu.com/ubuntu xenial InRelease Hit:4 http://eu-west-2.ec2.archive.ubuntu.com/ubuntu xenial-updates InRelease Hit:5 http://eu-west-2.ec2.archive.ubuntu.com/ubuntu xenial-backports InRelease Ign:6 https://packages.sury.org/php xenial InRelease Err:7 https://packages.sury.org/php xenial Release 404 Not Found Get:8 http://security.ubuntu.com/ubuntu xenial-security InRelease [109 kB] Hit:9 http://repo.zabbix.com/zabbix/3.4/ubuntu xenial InRelease Hit:10 https://packagecloud.io/phalcon/stable/ubuntu xenial InRelease Reading package lists... Done E: The repository 'https://packages.sury.org/php xenial Release' does not have a Release file. N: Updating from such a repository can't be done securely, and is therefore disabled by default. N: See apt-secure(8) manpage for repository creation and user configuration details. After that while, I tried to install php8.0-fpm I got the following: apt install php8-fpm Reading package lists... Done Building dependency tree Reading state information... Done E: Unable to locate package php8-fpm Also: apt install php8.0-fpm Reading package lists... Done Building dependency tree Reading state information... Done E: Unable to locate package php8.0-fpm E: Couldn't find any package by glob 'php8.0-fpm' E: Couldn't find any package by regex 'php8.0-fpm' |
How to configure the AT&T (Arris) BGW-210 router for IP Passthrough using static IP(s) and pointing to UniFi Dream Machine Pro? Posted: 25 Sep 2021 09:08 PM PDT We are setting up AT&T fiber internet with 5 usable static IPs and the Ubiquity UniFi Dream Machine Pro (UDM-Pro). I would like to configure the BGW-210 to act as a bridge to the UDM-Pro. I found this article on how to configure the BGW-210 in IP Passthrough mode (similar to bridge), but some of the details are a bit unclear and I need to adjust this setup process to use one or more of my static IP addresses on the UDM-Pro. In one paragraph, the article said DHCP is not needed for Passthrough mode: The DHCP Server option can be turned off if you're doing IP Passthrough, but you must leave it on if you are doing Default Server... But later on it said that you are still using DHCP: It is worth mentioning that this is still a DHCP address that your internal device is getting... Which leaves some confusion on whether or not DHCP server should be configured or disabled. Here are the things I'm fairly certain of: - Set the "Public LAN Subnet" different than the UDM-Pro LAN subnet.
- Setup the IP addresses provided by AT&T under the "Public Subnet" section. I did this and we can connect to the Internet.
- I need to enable "Allocation Mode" to Passthrough.
- I need to set the "Passthrough Mode" to DHCPS-fixed.
- I need to enter the MAC address of the UDM-Pro in "Passthrough Fixed Mac Address".
- I need to set up the UDM-Pro to get its WAN address from a DHCP server.
What I'm unclear about is:
- Under the "Public Subnet" section, do I leave "Public Subnet Mode" On and "Allow Inbound Traffic" Off?
- Do I leave "DHCP Server Enable" On, and what IP address ranges should be there? The author of the post seems to mix the Default Server instructions with the Passthrough instructions.
- After putting the BGW-210 in Passthrough mode, do I still need to turn off packet filtering and firewall features, or does Passthrough mode bypass these automatically?
Again, the goal is to "bridge" the AT&T router and have the UDM-Pro manage all routing and security. Thank you. |
What is service "ismserver" listening on port 9500? Posted: 25 Sep 2021 07:02 PM PDT I am on Ubuntu 16.04 Upon nmap -sV localhost , I came upon the output line 9500/tcp filtered ismserver which I cannot recognize. The service ismserver listening on the port matches the IANA record, as seen on http://www.adminsub.net/tcp-udp-port-finder/ismserver That nmap reports the port as filtered probably also reflects the fact that I have denied incoming traffic on that port. Yet, I am interested in what exactly is the service listening on the port, what it is doing, and how to disable it if possible. May I ask if anyone might have some ideas? Thanks heaps! PS locate ismserver returns no result.
Found something with similar name, but unsure if relevant: http://www.netplex-tech.com/docs/netvigil/v3.6/NetVigil%20Manual%20v3.6-19-1.html |
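The IANA name "ismserver" only says what port 9500 is registered as, not what is actually bound to it on this host; nmap can also report "filtered" simply because the firewall drops the probe while nothing listens at all. A quick sketch for mapping the listener (if any) to a process:

```bash
# Is anything listening on 9500, and which process owns it?
sudo ss -tlnp | grep ':9500'
sudo lsof -nP -iTCP:9500 -sTCP:LISTEN

# If a PID shows up, identify it and the unit that started it
ps -fp <PID>
systemctl status <PID>
```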
Failed to fully start up daemon: Connection timed out Posted: 25 Sep 2021 07:02 PM PDT After a few Ubuntu updates this started happening. Whenever logging into this server, either by using the LDAP client or a local user, it takes a long time to get authenticated and log in. /var/log/auth.log: Jan 14 06:19:16 norwich systemd-logind[18114]: Failed to fully start up daemon: Connection timed out Jan 14 06:19:41 norwich systemd-logind[18225]: Failed to enable subscription: Connection timed out Jan 14 06:19:41 norwich systemd-logind[18225]: Failed to fully start up daemon: Connection timed out Jan 14 06:19:41 norwich dbus[929]: [system] Failed to activate service 'org.freedesktop.systemd1': timed out Jan 14 06:20:06 norwich systemd-logind[18329]: Failed to enable subscription: Connection timed out Jan 14 06:20:06 norwich systemd-logind[18329]: Failed to fully start up daemon: Connection timed out Jan 14 06:20:06 norwich dbus[929]: [system] Failed to activate service 'org.freedesktop.systemd1': timed out Jan 14 06:20:31 norwich systemd-logind[18441]: Failed to enable subscription: Connection timed out Jan 14 06:20:31 norwich dbus[929]: [system] Failed to activate service 'org.freedesktop.systemd1': timed out Jan 14 06:20:31 norwich systemd-logind[18441]: Failed to fully start up daemon: Connection timed out Jan 14 06:20:56 norwich systemd-logind[18552]: Failed to enable subscription: Connection timed out Jan 14 06:20:56 norwich dbus[929]: [system] Failed to activate service 'org.freedesktop.systemd1': timed out Jan 14 06:20:56 norwich systemd-logind[18552]: Failed to fully start up daemon: Connection timed out Jan 14 06:21:21 norwich systemd-logind[18665]: Failed to enable subscription: Connection timed out Jan 14 06:21:21 norwich systemd-logind[18665]: Failed to fully start up daemon: Connection timed out Jan 14 06:21:21 norwich dbus[929]: [system] Failed to activate service 'org.freedesktop.systemd1': timed out The server is using Ubuntu 16.04 LTS and: Linux norwich 4.4.0-109-generic #132-Ubuntu SMP Tue Jan 9 19:52:39 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux I have already restarted the server with no effect. Here's the journal log: Jan 15 13:01:03 norwich dbus[929]: [system] Failed to activate service 'org.freedesktop.systemd1': timed out Jan 15 13:01:03 norwich systemd[1]: systemd-logind.service: Main process exited, code=exited, status=1/FAILURE Jan 15 13:01:03 norwich systemd[1]: Failed to start Login Service. Jan 15 13:01:03 norwich systemd[1]: systemd-logind.service: Unit entered failed state. Jan 15 13:01:03 norwich systemd[1]: systemd-logind.service: Failed with result 'exit-code'. Jan 15 13:01:03 norwich systemd[1]: systemd-logind.service: Service has no hold-off time, scheduling restart. Jan 15 13:01:03 norwich systemd[1]: Stopped Login Service. Jan 15 13:01:03 norwich systemd[1]: Starting Login Service... Jan 15 13:01:03 norwich systemd[1]: Failed to forward Released message: No buffer space available Jan 15 13:01:03 norwich systemd[1]: Failed to forward Released message: No buffer space available Jan 15 13:01:28 norwich systemd-logind[11142]: Failed to enable subscription: Connection timed out Jan 15 13:01:28 norwich systemd-logind[11142]: Failed to fully start up daemon: Connection timed out Jan 15 13:01:28 norwich dbus[929]: [system] Failed to activate service 'org.freedesktop.systemd1': timed out Jan 15 13:01:28 norwich systemd[1]: systemd-logind.service: Main process exited, code=exited, status=1/FAILURE Jan 15 13:01:28 norwich systemd[1]: Failed to start Login Service. 
Jan 15 13:01:28 norwich systemd[1]: systemd-logind.service: Unit entered failed state. Jan 15 13:01:28 norwich systemd[1]: systemd-logind.service: Failed with result 'exit-code'. Jan 15 13:01:28 norwich systemd[1]: systemd-logind.service: Service has no hold-off time, scheduling restart. Jan 15 13:01:28 norwich systemd[1]: Stopped Login Service. Jan 15 13:01:28 norwich systemd[1]: Starting Login Service... Jan 15 13:01:28 norwich systemd[1]: Failed to forward Released message: No buffer space available Jan 15 13:01:53 norwich dbus[929]: [system] Failed to activate service 'org.freedesktop.systemd1': timed out Jan 15 13:01:53 norwich systemd-logind[11146]: Failed to enable subscription: Failed to activate service 'org.freedesktop.systemd1': timed out Jan 15 13:01:53 norwich systemd-logind[11146]: Failed to fully start up daemon: Connection timed out Jan 15 13:01:53 norwich systemd[1]: systemd-logind.service: Main process exited, code=exited, status=1/FAILURE Jan 15 13:01:53 norwich systemd[1]: Failed to start Login Service. Jan 15 13:01:53 norwich systemd[1]: systemd-logind.service: Unit entered failed state. Jan 15 13:01:53 norwich systemd[1]: systemd-logind.service: Failed with result 'exit-code'. Jan 15 13:01:53 norwich systemd[1]: systemd-logind.service: Service has no hold-off time, scheduling restart. Jan 15 13:01:53 norwich systemd[1]: Stopped Login Service. Jan 15 13:01:53 norwich systemd[1]: Starting Login Service... Jan 15 13:01:53 norwich systemd[1]: Failed to forward Released message: No buffer space available Jan 15 13:02:18 norwich systemd-logind[11150]: Failed to enable subscription: Connection timed out Jan 15 13:02:18 norwich dbus[929]: [system] Failed to activate service 'org.freedesktop.systemd1': timed out Jan 15 13:02:18 norwich systemd-logind[11150]: Failed to fully start up daemon: Connection timed out Jan 15 13:02:18 norwich systemd[1]: systemd-logind.service: Main process exited, code=exited, status=1/FAILURE Jan 15 13:02:18 norwich systemd[1]: Failed to start Login Service. |
Group policy not updating on Windows Server 2012 R2 Posted: 25 Sep 2021 08:02 PM PDT I have a small solution for one of our customers. There are 3 servers, a Domain Controller, Session Host Server and a Sage Server. Just recently we've noticed that group policies are no longer applying to users logging in to the Session Host server. As such, profile redirection isn't occuring which is causing a whole host of other issues. If I run gpupdate /force I get the following response... The processing of Group Policy failed. Windows could not authenticate to the Active Directory service on a domain controller. (LDAP Bind function call failed). Look in the details tab for error code and description. A quick look in the system error logs shows me kerberos errors all over the place... The event 1006 I can see seems to show 'Invalid Credentials'... Although I'm unsure what credentials it's complaining about. I've checked with my colleagues and nobody is owning up to having changed anything, so i'm stumped as to what has caused this. Can anyone point me in the right direction? |
HTTPS/SSL not working with Cups Posted: 25 Sep 2021 10:02 PM PDT I have a site mysite.com and a printer mysite.com:631/printers/myprinter Well, https:// www.mysite.com works perfectly and is validated, but https:// www.mysite.com:631/printers/myprinter won't. It says that the connection is not secured and I cannot install the printer. This is what I have in conf.d in my CUPS server: DefaultEncryption Required ServerCertificate /etc/letsencrypt/live/mysite/cert.pem ServerKey /etc/letsencrypt/live/mysite/privkey.pem SSLPort 443 Any help making the printer and the site https? |
nohup daemon process dies when terminal is closed Posted: 25 Sep 2021 10:04 PM PDT I'm trying to copy the contents of a remote server to my local computer. Here is the command that I'm using: nohup rsync -chavP -e 'ssh -p 23' --stats user@ipaddress:~/great/stuff /volume1/code > ~/rsynclog & My understanding of the above command is that it should be creating a daemon process (nohup & ) which is wholly disconnected from my terminal session. As such, I should be able to safely close my terminal and have the process continue on its merry way. The process does start as expected. However, if I close my terminal session, the process no longer shows up in ps aux | grep rsync . The process has obviously died, as you can see in the rsynclog: receiving incremental file list Killed by signal 1. rsync error: unexplained error (code 255) at rsync.c(615) [Receiver=3.0.9] This is happening as a direct result of my closing the terminal. If I leave the terminal open for hours then rsync works fine. As soon as I close it, however, the process dies. Does anyone know enough about nohup to explain why it doesn't seem to be working here? I'm creating the terminal using PuTTY in Windows, but I also tried it through a terminal on my Ubuntu machine and got the same result. I tried a simpler test with nohup sleep 100000 & and this DID work! However, it is unclear to me why that daemon keeps living while rsync dies... |
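"Killed by signal 1" is a hang-up (SIGHUP) reaching part of the rsync/ssh pipeline when the terminal goes away; nohup protects the process it launches and redirects stdout/stderr, but stdin can stay attached to the dying pty, and the ssh transport is more sensitive to this than a plain `sleep`. A hedged sketch of workarounds that usually survive closing the terminal, reusing the paths and port from the command above:

```bash
# Fully detach: new session, stdin from /dev/null, all output to the log
setsid nohup rsync -chavP -e 'ssh -p 23' --stats \
    user@ipaddress:~/great/stuff /volume1/code \
    < /dev/null > ~/rsynclog 2>&1 &

# Or run it inside a terminal multiplexer and detach (Ctrl-b d in tmux),
# then reattach later with `tmux attach -t rsyncjob`
tmux new -s rsyncjob
rsync -chavP -e 'ssh -p 23' --stats user@ipaddress:~/great/stuff /volume1/code
```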
Windows 2016 DNS Server: not using forwarder when recursively resolving CNAME in delegated zone? Posted: 25 Sep 2021 06:01 PM PDT I don't think I'm going mad here... Our AD domain controllers (Server 2016) are the DNS servers for foo.example . Within that, we have a delegation, r53.foo.example , which points out to the nameservers for that zone in Amazon Route 53. One of the records in the Route 53 zone is a CNAME to an EC2 instance's public DNS name, i.e. bar.r53.foo.example IN A ec2-1-2-3-4.us-west-1.compute.amazonaws.com. The Windows DNS server is set to use Google public DNS servers as forwarders, and root hints are disabled. Recursion is enabled. From a client, if I query ec2-1-2-3-4.us-west-1.compute.amazonaws.com , it resolves correctly. Then, clear all the DNS caches. If I now query bar.r53.foo.example , the Windows DNS server will query the delegated zone's DNS server (because of the delegation), and get the CNAME result, but that upstream server doesn't recursively resolve the A record. Windows then sends an A record query to the delegated zone's nameserver - and not the NS for us-west-1.compute.amazonaws.com , and gets a REFUSED response. I would have expected it to either use the configured forwarders (because ec2-1-2-3-4.us-west-1.compute.amazonaws.com is not in a zone it hosts authoritatively nor a delegated zone), or to at least recursively resolve using the NS for us-west-1.compute.amazonaws.com . Instead, it leaves clients without a full resolution. If the ec2-1-2-3-4.us-west-1.compute.amazonaws.com IN A 1.2.3.4 record happens to already be in the server's cache, then the client query resolves completely, but obviously this isn't guaranteed. This smells like a bug, but maybe I'm missing something? Edit to add: this is only true under Server 2016 DNS server. Same config under 2012 R2 gives the expected behaviour. |
How to tune apache with mpm_event Posted: 25 Sep 2021 10:04 PM PDT regarding apache tuning, there are lots of good documents and posts on the web eg. How To Tune Apache on Ubuntu 14.04 Server unfortunately almost all of them describe how to tune apache with mpm_prefork. as I understand though, prefork method is a bit old and lacks efficiency in multi threading and ... anyway my client insists on using mpm_event, and their main problem is with a few number of visitors their VPS become unresponsive, and I can see that the problem is with memory as their ram gets full, it gets to use the swap, swap gets full and the only way to restore the server would be restarting httpd service, or god forbid, restarting server itself! The VPS has 2GB of ram while it servers as webserver, mailserver and MySql is also installed on it. the php.ini file is allowing max_memory=140M (which i think is a bit high?!) but for now they really need this amount. with a quick calculation about other tasks that VPS is doing, I think it is safe to assign 1GB of memory to apache. But I can not find any method on the web to calculate the settings for mpm_event to limit it's usage to this 1GB max. any help would be appreciated on this calculation needed info from comments: ps -ef | grep php => nothing phpinfo() from Apache => Server Api : CGI/FastCGI apachectl -M => Loaded Modules: core_module (static) authn_file_module (static) authn_dbm_module (static) authn_anon_module (static) authn_dbd_module (static) authn_socache_module (static) authn_core_module (static) authz_host_module (static) authz_groupfile_module (static) authz_user_module (static) authz_dbm_module (static) authz_owner_module (static) authz_dbd_module (static) authz_core_module (static) access_compat_module (static) auth_basic_module (static) auth_form_module (static) auth_digest_module (static) allowmethods_module (static) file_cache_module (static) cache_module (static) cache_disk_module (static) cache_socache_module (static) socache_shmcb_module (static) socache_dbm_module (static) socache_memcache_module (static) so_module (static) macro_module (static) dbd_module (static) dumpio_module (static) buffer_module (static) ratelimit_module (static) reqtimeout_module (static) ext_filter_module (static) request_module (static) include_module (static) filter_module (static) substitute_module (static) sed_module (static) deflate_module (static) http_module (static) mime_module (static) log_config_module (static) log_debug_module (static) logio_module (static) env_module (static) expires_module (static) headers_module (static) unique_id_module (static) setenvif_module (static) version_module (static) remoteip_module (static) proxy_module (static) proxy_connect_module (static) proxy_ftp_module (static) proxy_http_module (static) proxy_fcgi_module (static) proxy_scgi_module (static) proxy_wstunnel_module (static) proxy_ajp_module (static) proxy_balancer_module (static) proxy_express_module (static) session_module (static) session_cookie_module (static) session_dbd_module (static) slotmem_shm_module (static) ssl_module (static) lbmethod_byrequests_module (static) lbmethod_bytraffic_module (static) lbmethod_bybusyness_module (static) lbmethod_heartbeat_module (static) unixd_module (static) dav_module (static) status_module (static) autoindex_module (static) info_module (static) suexec_module (static) cgi_module (static) dav_fs_module (static) dav_lock_module (static) vhost_alias_module (static) negotiation_module (static) dir_module (static) actions_module (static) 
speling_module (static) userdir_module (static) alias_module (static) rewrite_module (static) suphp_module (shared) ruid2_module (shared) mpm_event_module (shared) |
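Since phpinfo() reports CGI/FastCGI, the per-request PHP memory lives in the PHP processes rather than in httpd itself, so the practical Apache-side knob is capping concurrency: MaxRequestWorkers multiplied by the measured memory per in-flight request should fit inside the RAM budget. A rough sketch with illustrative numbers only (the 64 MB figure is an assumption; measure the real RSS with ps or top before settling on values), subject to the mpm_event rule that MaxRequestWorkers must not exceed ServerLimit x ThreadsPerChild:

```apache
# mpm_event tuning sketch; values are placeholders for a ~1 GB budget
<IfModule mpm_event_module>
    StartServers             1
    ServerLimit              2
    ThreadsPerChild          8
    # MaxRequestWorkers ~= RAM budget / measured per-request memory,
    # e.g. 1024 MB / ~64 MB observed  =>  ~16 concurrent requests
    MaxRequestWorkers        16
    MaxConnectionsPerChild   10000   # recycle children to contain leaks
</IfModule>
```

Because the PHP memory is consumed by the CGI/FastCGI children, the matching limit on that side (for example pm.max_children if PHP-FPM is in the picture, which is an assumption here) matters at least as much as the Apache numbers; the httpd file above is only half of the budget.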
Apache log : 'myipha.php not found or unable to stat' and others Posted: 25 Sep 2021 09:08 PM PDT I recently get in my apache error.log a long set of : [Thu Jun 30 13:03:22.005214 2016] [:error] [pid 15759] [client xx.xx.xxx.xxx:xxxxx] script '/var/www/html/myiphb.php' not found or unable to stat [Thu Jun 30 13:03:27.164111 2016] [:error] [pid 15644] [client xx.xx.xxx.xxx:xxxxx] script '/var/www/html/myiphb.php' not found or unable to stat [Thu Jun 30 13:03:32.314190 2016] [:error] [pid 15757] [client xx.xx.xxx.xxx:xxxxx] script '/var/www/html/myiphd.php' not found or unable to stat [Thu Jun 30 13:03:37.462354 2016] [:error] [pid 15514] [client xx.xx.xxx.xxx:xxxxx] script '/var/www/html/myiphd.php' not found or unable to stat [Thu Jun 30 13:03:42.559487 2016] [:error] [pid 15760] [client xx.xx.xxx.xxx:xxxxx] script '/var/www/html/myiphb.php' not found or unable to stat [Thu Jun 30 13:03:42.606343 2016] [:error] [pid 15760] [client xx.xx.xxx.xxx:xxxxx] script '/var/www/html/myiphd.php' not found or unable to stat [Thu Jun 30 13:03:42.653108 2016] [:error] [pid 15760] [client xx.xx.xxx.xxx:xxxxx] script '/var/www/html/myipha.php' not found or unable to stat [Thu Jun 30 13:03:42.699306 2016] [:error] [pid 15760] [client xx.xx.xxx.xxx:xxxxx] script '/var/www/html/myiphc.php' not found or unable to stat The PID belongs to apache. I couldn't find much on the internet about it. Anyone knows what it is ? |
nginx: How do I redirect all requests to the frontpage? Posted: 25 Sep 2021 04:01 PM PDT I am shutting down a website and am stumped on a really simple problem; I need to redirect all URLs to the homepage (except, obviously, the homepage itself). I thought this should work, but it causes the homepage to get in a loop. rewrite ^/.+ $scheme://$server_name permanent; I thought that would have meant any URL containing 1 or more characters. (Homepage obviously containing none). |
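The loop is likely because the homepage request is internally rewritten to its index file (e.g. /index.html or /index.php), which does match ^/.+ and gets redirected again. A sketch that avoids regex entirely by giving the front page (and its index file) exact-match locations, assuming the holding page lives under the server's root:

```nginx
server {
    listen 80;
    server_name example.com;        # placeholder
    root /var/www/holding;          # placeholder path
    index index.html;

    location = / { }                # homepage served normally
    location = /index.html { }      # target of the internal "index" redirect
    location / {
        return 301 /;               # everything else goes to the homepage
    }
}
```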
Redhat: Enabling proxy for specific IP addresses only Posted: 25 Sep 2021 05:06 PM PDT I have a server that ordinarily does not require a proxy to connect (e.g.) to any URL on the internet. However some specific systems (IP addresses) I need to connect to via a specific proxy. My question is how I can specify the proxy per IP-Address / subnet without disrupting the normal configuration where no proxy is required. I am aware of the following variables that can be set, however this operates on an exclusion basis to access the proxy rather than an inclusion basis: export http_proxy=http://my-proxy.tld:8080 export https_proxy=https://my-proxy.tld:8080 export no_proxy=127.0.0.1,localhost,192.168.1.123, ... The server in question is a RedHat V7.1 box. Any ideas will be much appreciated. Thanks in advance Alex |
Mysql doesn't start, mysqld.sock is missing Posted: 25 Sep 2021 03:27 PM PDT After I rebooted my server, everything started up normally except for "mysql". I tried to start it manually with "/etc/init.d/mysql restart" or "service mysql restart", but it fails. In the log file it says: Jul 16 08:13:38 localhost /etc/init.d/mysql[18136]: error: 'Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)' Jul 16 08:13:38 localhost /etc/init.d/mysql[18136]: Check that mysqld is running and that the socket: '/var/run/mysqld/mysqld.sock' exists! I checked that path, but I didn't find the socket file. Please advise? |
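The missing socket is usually a symptom rather than the cause: the socket only exists while mysqld is running, and here mysqld never comes up, so the init script cannot reach it. The real reason is normally in MySQL's own error log; a full disk or a missing /var/run/mysqld directory after a reboot are common culprits. A diagnostic sketch (log paths vary by distribution):

```bash
# Why did mysqld refuse to start?
sudo tail -n 100 /var/log/mysql/error.log    # or /var/log/mysqld.log

# Common post-reboot culprits
df -h                      # a full datadir/tmpdir prevents startup
ls -ld /var/run/mysqld     # must exist and be owned by mysql

# Recreate the socket directory if it vanished (/var/run is typically tmpfs)
sudo mkdir -p /var/run/mysqld
sudo chown mysql:mysql /var/run/mysqld
sudo service mysql start
```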
Cobbler with puppet - slow in installing packages Posted: 25 Sep 2021 03:06 PM PDT I have setup cobbler that runs puppet automatically after OS installation. It works fine, except that it takes a while for puppet to install all of the packages. This is still a testing phase, so it was just around 15 packages in total. My site.pp looks like this: node server1 { include myrepo include bacula include vsftpd } myrepo is just copying a repository file to the client server. bacula lists 11 packages to be installed, and vsftpd only 1 package. Right after the OS (Suse) was installed, the repository file was copied over, vsftpd was installed, but only 2 bacula packages were installed in the client server. I had to wait for like half an hour or so before all of the bacula packages got installed. Puppet log is empty. If I were to run puppet manually, the installation of the packages will run smoothly and fast. What could be the reason of the delay? |
Can I delete DLLs from the windows/temp folder Posted: 25 Sep 2021 06:01 PM PDT This server is running in production, has IIS, and is a utility server; the OS is Server 2008 Enterprise. There are 11GB of DLL files, at 12KB apiece, in the Windows\Temp folder. Is it safe to delete these? Would putting them inside a subfolder test whether deleting them would break anything? I think these are being created by IIS. Would I need to unregister these DLLs? I don't know if they are registered or not, and I haven't found a rock-solid way to identify whether they are registered, nor have I found a way of determining their dependencies. I've read in other forums that I can just delete them, yet I also find some threads where people are missing a DLL that they need, so I am just trying to make sure I am not in the latter category. Thank you for your help. I've used Process Explorer and referenced the name of one of the recently created DLLs; it pointed me to w3wp.exe and I found out which application pool is responsible. It is from a vendor. |
Installation failure in MS Exchange Server 2013 at step 8 Posted: 25 Sep 2021 05:06 PM PDT During the Exchange Server 2013 installation I get the following error message. I configured Active Directory and the domain and assigned roles for them; every time, at 97% in step 8, I get this. Error: The following error was generated when "$error.Clear(); if ($RoleStartTransportService) { start-SetupService -ServiceName FMS } " was run: "Service 'FMS' failed to reach status 'Running' on this server.". |
Null / blank values on puppet facts Posted: 25 Sep 2021 03:06 PM PDT How can I quickly and easily state that a null / blank value is OK on a fact within puppet? When assembling a custom fact I'm doing something like the following: /puppet/production/modules/hosts/lib/facter Facter.add(:hostslocal) do setcode do Facter::Util::Resolution.exec("cat /etc/hosts.local 2> /dev/null") end end This works perfectly except if the file doesn't exist, in which case it will blow up with something like the following when used in Puppet. Detail: Could not find value for 'hostslocal' I've been able to work around it with something akin to 'if file not exists write a line that contains only a comment' but that seems kludgy. |
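A minimal sketch in the same custom-fact style: guard the exec with a file-existence check so the fact simply resolves to nil (and is omitted) when /etc/hosts.local is absent, instead of relying on the redirected cat. Note that any manifest that reads $hostslocal will still need its own default for hosts without the file; the filename below is an assumed convention for the module path quoted above:

```ruby
# /puppet/production/modules/hosts/lib/facter/hostslocal.rb (assumed filename)
Facter.add(:hostslocal) do
  setcode do
    if File.exist?('/etc/hosts.local')
      Facter::Util::Resolution.exec('cat /etc/hosts.local')
    end
    # returning nil here means the fact is simply not reported
  end
end
```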
How to set some nodes on pause/hold/noop? Posted: 25 Sep 2021 03:06 PM PDT is there an option to set a node on hold (set it on noop), on the puppetmaster side? i am looking for a setting like in the example below. every node which has a variable called noop set to true in it's definition, won't be updated. so example1.node.com will not have the test1 file after multiple puppet client runs but example2.node.com will have the file. is there such an option? does another approach exist? (of course i can simple add a "_" to the node name and it will stop matching. i am looking for the official way. node "example1.node.com" { $noop = true file { "/root/test1": content => "test", ensure => present, } } node "example2.node.com" { file { "/root/test1": content => "test", ensure => present, } } |
Internal Server Error (Timeout waiting for output from CGI) Posted: 25 Sep 2021 04:01 PM PDT All, I'll admit right away that I'm not very familiar with the server side of things, just FYI, so I'm not sure how to debug this error. I moved my website from a Windows platform to a Linux platform and a new VPS. I'm getting an "Internal Server Error" every once in a while and I'm not sure why. When I look at the server logs I'm seeing this: "Timeout waiting for output from CGI script /var/www/cgi-bin/cgi_wrapper/cgi_wrapper" I'm running WordPress for my sites. Can someone tell me how to debug this error? Sometimes the site works fine, which is really strange. Any help would be great. |