Linux Gateway Policy Routing and TCP MSS Issue (maybe)? Posted: 10 Apr 2022 02:57 AM PDT I have an Ubuntu 20 machine as an internet gateway with two WANs, ens160 and ens192. If I switch the default route on the gateway with ip r re 0/0 dev ens160 (or ens192), the clients on the LAN access the web without any problem. But if I want to route a specific user through a specified WAN, such as ip ru add from 192.168.3.60 lookup 100 ip r a 0/0 dev ens192 t 100 then client 192.168.3.60 has an issue accessing some web resources like images. It looks like a TCP MSS problem, but iptables -A FORWARD -p tcp --tcp-flags SYN,RST SYN -o ens192 -j TCPMSS --clamp-mss-to-pmtu can't fix it. Can anyone offer some help? |
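A hedged sketch of what to try first: per the iptables-extensions man page, the TCPMSS target is only valid in the mangle table, so the rule in the question (which lands in the default filter table) may never have been installed at all. Interface names are copied from the question:

```shell
# Clamp MSS on forwarded SYNs in the mangle table (covers both WANs)
iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN \
    -j TCPMSS --clamp-mss-to-pmtu
# Verify the rule exists and watch its packet counters climb
iptables -t mangle -L FORWARD -v -n
```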
Who is the manufacturer of a RAID card LSI 9207-8i? Posted: 10 Apr 2022 02:33 AM PDT I've been trying to set up an older IBM server that supports 3 RAID controllers: ServeRAID H1110, M1115, and M5110. I found a controller on eBay that was advertised as IBM 5110 IT Mode FW: P20 LSI 9207-8i. But since I've not come across IT or IR mode before, only now have I realised that it is actually an HBA, not a RAID board. The official IBM/Lenovo specifications list the previously mentioned RAID controllers, which were supposed to support RAID mode. What is confusing to me is whose hardware and software this actually is. Are those controllers actually produced by IBM? If a seller is advertising an IBM M5110 controller but adds LSI IT mode FW, is that still an IBM controller? The whole manufacturer/FW-provider situation is really confusing here. I got, as advertised, an IBM controller with LSI firmware, yet in the BIOS the firmware is named Avago Technologies and utilities for this board are available on a Broadcom website. I actually don't know what card I have, IBM or LSI? And if LSI, what does the designation IBM M5110 have to do with this card then? My intention is actually to flash the card from IT to IR mode, since I relied only on the information that it is an IBM ServeRAID M5110. Thank you for all replies! |
Associating an IP with an amplify app's custom domain Posted: 10 Apr 2022 02:25 AM PDT I have an AWS Amplify app with a custom root domain. The domain owner (managed on GoDaddy) wants to modify the GoDaddy DNS records to forward traffic to the Amplify app. Since the custom domain is a root domain, they can't put the URL generated by Amplify in their @ DNS record, and need an IP instead. Is there a way to associate an (elastic?) IP with an Amplify app? Thank you |
stuck on "update-initramfs" Posted: 09 Apr 2022 10:37 PM PDT I was removing a package and it froze at this step: root@elk:/home/elk# apt remove metricbeat Reading package lists... Done Building dependency tree Reading state information... Done Package 'metricbeat' is not installed, so not removed 0 upgraded, 0 newly installed, 0 to remove and 183 not upgraded. 1 not fully installed or removed. After this operation, 0 B of additional disk space will be used. Setting up linux-image-5.4.0-92-generic (5.4.0-92.103) ... Processing triggers for linux-image-5.4.0-92-generic (5.4.0-92.103) ... /etc/kernel/postinst.d/initramfs-tools: update-initramfs: Generating /boot/initrd.img-5.4.0-92-generic Progress: [ 67%] [###################################.........................................] Does anyone have an idea why this happened? |
Cannot disable cipher suite "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384" in IIS / Windows Server 2022 Posted: 09 Apr 2022 10:24 PM PDT I am using this command in Windows Server 2022, latest updates: Disable-TlsCipherSuite -Name "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384" It completes without error. I then tried restarting IIS (and also the server). But this cipher suite still shows up in SSL Labs. Is this suite part of the suite named "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"? Is that why it can't be turned off? Any guidance would be greatly appreciated. |
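For what it's worth, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384 and TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 are two distinct cipher suites, so disabling one does not disable the other. A hedged way to check what Schannel itself now advertises (as opposed to what SSL Labs observes, which can be affected by Group Policy ordering or anything sitting in front of the server), using only the stock TLS cmdlets:

```powershell
# List the cipher suites the OS currently has enabled
Get-TlsCipherSuite | Select-Object -ExpandProperty Name
# Check whether the CBC suite is really gone after the restart
Get-TlsCipherSuite -Name "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384"
```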
Why does 'dig <domain.com> ANY' time out? Posted: 09 Apr 2022 11:21 PM PDT I've tried to query for 'any' records using dig, and I almost always get a timeout error. But when I search for why this is the case, there doesn't seem to be any answer at all. psmith@pop-os:~$ dig +short google.com 142.251.35.174 psmith@pop-os:~$ dig +short google.com any ;; connection timed out; no servers could be reached Thanks. |
Can't open /usr/lib/ssl/openssl.cnf Posted: 09 Apr 2022 10:56 PM PDT Creating a key and certificate: openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/apache2/ssl/apache.key -out /etc/apache2/ssl/apache.crt fails with: Can't open /usr/lib/ssl/openssl.cnf for reading, No such file or directory 140713226073408:error:02001002:system library:fopen:No such file or directory:../crypto/bio/bss_file.c:69:fopen('/usr/lib/ssl/openssl.cnf','r') 140713226073408:error:2006D080:BIO routines:BIO_new_file:no such file:../crypto/bio/bss_file.c:76: I reinstalled it with: apt install openssl --reinstall and checked the file with: ls -al /usr/lib/ssl/openssl.cnf lrwxrwxrwx 1 root root 20 Mar 19 02:25 /usr/lib/ssl/openssl.cnf -> /etc/ssl/openssl.cnf ls -al /etc/ssl/openssl.cnf ls: cannot access '/etc/ssl/openssl.cnf': No such file or directory How can I solve the issue then? |
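The ls output shows the symlink is intact but its target /etc/ssl/openssl.cnf is missing. A hedged sketch: on Debian/Ubuntu that target is a conffile, and a plain --reinstall deliberately leaves missing conffiles alone, so dpkg has to be told to restore it:

```shell
# Ask dpkg to recreate missing conffiles while reinstalling openssl
apt install --reinstall -o Dpkg::Options::="--force-confmiss" openssl
# Confirm the symlink target is back
ls -al /etc/ssl/openssl.cnf
```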
"Open" port is not really open Posted: 09 Apr 2022 11:26 PM PDT I used the following command to open port 2022 in Ubuntu: sudo iptables -A INPUT -p tcp --dport 2022 -j ACCEPT But if I start a web server on port 2022 it is not reachable. If I run the command to see open ports: sudo iptables -vnL The output starts with Now, ports that show up there like 3000 or 4000 work, but 2022 does not. What's going on and how can I enable port 2022? |
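A hedged guess at the mechanism: iptables -A appends to the end of the chain, so if an existing rule earlier in INPUT (a REJECT/DROP, or a ufw-managed chain) already matches port 2022, the appended ACCEPT is never reached. Inspecting the chain with rule numbers makes this visible:

```shell
# Show INPUT rules in order; the first matching rule wins
sudo iptables -L INPUT --line-numbers -n -v
# Insert the ACCEPT at the top instead of appending it at the bottom
sudo iptables -I INPUT 1 -p tcp --dport 2022 -j ACCEPT
```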
Force nginx reverse proxy to resolve AAAA record only (and ignore A record) of a domain? Posted: 09 Apr 2022 11:12 PM PDT I am trying to use nginx on my VPS (mydomain.com with A and AAAA records) with a public static IP address (both IPv4 and IPv6) to create an IPv4-to-IPv6 proxy that makes an IPv6-only home server reachable from normal IPv4 networks. The A record of the domain points to the nginx server and the AAAA record to the IPv6-only home server directly. My current nginx server block looks like this: server { listen 443 ssl http2; server_name mydomain.com; ssl_certificate /etc/letsencrypt/live/fullchain.pem; ssl_certificate_key /etc/letsencrypt/live/key.pem; ssl_trusted_certificate /etc/letsencrypt/live/ca.pem; location / { proxy_pass https://[IPv6 address of the home server]; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; } } So far the proxy works quite well and all traffic to mydomain.com from IPv4 is proxied by nginx, but I am wondering: is there a way to use the domain instead of the IPv6 address in the proxy_pass while nginx resolves only its AAAA record? Many thanks in advance! |
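One hedged sketch, assuming nginx 1.23.1 or newer (which added the ipv4=off flag on the resolver directive): putting the upstream in a variable forces nginx to resolve it at request time through resolver, where address families can be restricted, instead of once at startup through the system resolver:

```nginx
# Inside the server block; the resolver address is Cloudflare's, only an example
resolver 2606:4700:4700::1111 ipv4=off valid=300s;
set $home_backend https://mydomain.com;
location / {
    proxy_pass $home_backend;
    proxy_set_header Host $host;
}
```

Caveats: with a variable in proxy_pass, nginx no longer rewrites the URI for you, and since mydomain.com's A record points back at the VPS itself, a dedicated AAAA-only hostname (e.g. home.mydomain.com, made up here) is safer in practice.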
asterisk - what's the difference between format_* and codec_*? Posted: 09 Apr 2022 11:23 PM PDT When building Asterisk, there are two different kinds of entities related to audio formats: formats and codecs. What's the difference? For example, I need support only for alaw+ulaw+g.729 for the voice traffic itself, and only plain wav + mp3 for announcements and other sounds played to subscribers. Which modules should I enable? Thanks. |
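Not an authoritative answer, but the usual distinction is: codec_* modules transcode live media streams between codecs, while format_* modules read and write sound files on disk. A sketch of a menuselect invocation under that assumption (module names from memory; check what your source tree actually offers, and note G.729 normally requires a separately licensed binary codec module rather than an in-tree one):

```shell
# Run from the Asterisk source tree after ./configure
menuselect/menuselect --enable codec_alaw --enable codec_ulaw \
    --enable format_wav --enable format_mp3 menuselect.makeopts
```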
Access to VM using KVM bridged network interface inside company network not working outside of room LAN Posted: 10 Apr 2022 03:52 AM PDT I work at an institute, so as you can imagine we have the typical proxy, AD, DNS and so on in place. My group is looking into transitioning to Linux for our servers (containing several GPUs for various research purposes) due to easier setup of our environments (machine learning, rendering) and access to tools required for our work. But before we do that, I took it upon myself (with permission from our IT admin and my boss) to create a simple setup that will give us some perspective on what we need exactly. I use a normal desktop PC with Ubuntu Server 20.04 LTS as the host. For managing it I use SSH (via PuTTY) or a web interface (Cockpit). On top of the host I run KVM with a bunch of QEMU VMs, all of which share the same bridged network interface (only a single VM may run at a time due to the PCI passthrough in place). The host as well as the VMs have hostnames following the scheme <hostname>.<domain>. Since our IT guy has been an outspoken anti-Linux driving force in our department, every machine in the department that runs Linux has to be administered by the employee who "owns" it. We are not given any help from the admin if we have an issue regarding those machines. Linux machines also need to have a -L suffix so that he knows they are not his problem. In general the host name for PCs and notebooks follows the scheme <department>-<machine type><3 digit numeric value>, so the complete name will be <department>-<machine type><3 digit numeric value>.<domain>. Let's say I have the following, with foo.bar.com being <domain>:

- ABC-DT001-L.foo.bar.com - PC, host running Ubuntu Server 20.04, IP address 10.21.5.83
- ABC-DT001-L-VM0.foo.bar.com - VM, guest running Xubuntu 20.04, IP address 10.21.5.104
- ABC-NB001.foo.bar.com - notebook, running Windows 10 with AD user, IP address 10.21.5.104
- Network bridge - converted the single network interface of the ABC-DT001-L to a bridge, which in turn is used by every VM I am planning to run
- HP ProCurve Switch 1810g-8 J9449A - managed switch my machines are connected to inside my office (I have no access to it), IP address 10.21.99.10 (or so the label says)
- Default gateway - IP address 10.21.5.1
- DHCP server - IP address 10.21.1.3
- DNS server - IP address 10.21.1.3
- Primary WINS server - IP address 10.21.1.10

From our IT department I was told that if a machine has a permanent MAC address I can always use e.g. ABC-DT001-L.foo.bar.com to access my ABC-DT001-L machine. On the KVM level I have the default bridge (virbr0 or something) that is set up upon creating a VM in order to allow the VM's system to sync time somehow, as well as my own bridge that binds the VM to the only Ethernet port my host has, allowing it to access the Internet as well as the internal network of our institute. So in terms of MAC addresses on my PC I have

- ABC-DT001-L.foo.bar.com - MAC address 14:b3:1f:07:ee:5a
- ABC-DT001-L-VM0.foo.bar.com - MAC address 52:54:00:2a:b8:4f

regarding my bridge. In case you are interested, for the default bridge I have 52:54:00:58:04:50 (host side) and 52:54:00:1e:cf:8b (VM side). Here is what works:

- ABC-NB001 - can ping and SSH ABC-DT001-L
- ABC-NB001 - can ping, SSH and VNC connect to ABC-DT001-L-VM0 (port 5900 for Cockpit, port 5901 for any other VNC viewer, since I also run X11VNC in parallel to what Cockpit already provides, to allow access in the future to just the VM and not the underlying server)
- ABC-DT001-L - can ping and SSH ABC-DT001-L-VM0, can ping ABC-NB001
- ABC-DT001-L-VM0 - can ping and SSH ABC-DT001-L, can ping ABC-NB001

In addition, I can ping and access the web console of ABC-DT001-L from our Windows servers (the ones we will be converting to Linux in the future). Colleagues of mine, working from home, can also (through our VPN) ping and access ABC-DT001-L. Here is what doesn't work:

- Access to ABC-DT001-L-VM0 from our servers, including even just ping
- Access to ABC-DT001-L-VM0 from my colleagues' machines through VPN

So it appears anything beyond the switch is unable to access the VM(s) in any manner. One thing I have noticed, which I guess is to be expected, is that behind the switch I can also simply use the hostname instead of the full name to get access to all of my machines. It appears that the problem is the bridge. Before I go poke the bear (our IT department) I would like to give it a shot at solving the issue (with your help of course). |
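Before involving IT, a hedged set of host-side checks (the MAC value is copied from above; the bridge name br0 is an assumption, use whatever you configured): if the VM's frames reach the bridge but replies from beyond the switch never arrive, likely culprits are port security on the switch (one MAC per port) or the VM's MAC/IP simply not being registered in the department's DHCP/DNS:

```shell
# Which ports are attached to the bridge?
bridge link show
# Does the bridge's forwarding table know the VM's MAC?
bridge fdb show | grep -i 52:54:00:2a:b8:4f
# Watch whether pings from a host beyond the switch ever arrive
sudo tcpdump -ni br0 icmp
```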
How to add a single header for any incoming mail with Postfix? Posted: 10 Apr 2022 02:12 AM PDT I have Postfix running with a number of smtpd processes configured in master.cf like this: # Internet facing one 1.2.3.4:25 inet n - y - - smtpd -o ... # internet-only overrides # Internal facing one 10.0.0.1:10026 inet n - y - - smtpd -o ... # internal-only overrides Now, I'd like to add a single header, with a static name and value, to incoming mail depending on which smtpd it was received on. Example: X-Gert-Postfix-Received-From: the evil internet My options considered:

1. Add the header_checks option and use the PREPEND action in the file. Nearly there, but:
- It requires matching an existing header, and it will then add one more header on every subsequent match.
- I don't always have a certain header present already; perhaps even a From is missing, for example.
- In case you have existing header_checks, there's no easy way to stack two header_checks files, I think.
2. Build a custom app that uses the Milter protocol and hook that up to Postfix with smtpd_milters. Of course, this will work. I can inspect the mail in my own app, then inject the header there. It seems like over-engineering for a simple task like adding a header. Additionally, it requires extra maintenance, with the need to run another daemon app, quite some boilerplate code, etc.
3. As suggested in a comment, use check_recipient_access (related Q). Same downsides as header_checks (see 1).

I feel like I'm missing something simple. Anyone got a better idea? |
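A hedged sketch of one direction, since header_checks is really a parameter of the cleanup service rather than of smtpd: give the internet-facing smtpd its own cleanup service via cleanup_service_name, and give that service its own header_checks map. The service name, map path and pattern below are made up, and the caveat about PREPEND still applies — it fires once per matching header, so the pattern should match something unique, such as the Received header your own Postfix adds:

```
# master.cf (sketch)
1.2.3.4:25    inet  n - y - -  smtpd
    -o cleanup_service_name=cleanup_inet
cleanup_inet  unix  n - y - 0  cleanup
    -o header_checks=pcre:/etc/postfix/inet_header_checks

# /etc/postfix/inet_header_checks (sketch; anchor on your own hostname)
/^Received: from .* by mail\.example\.com/ PREPEND X-Gert-Postfix-Received-From: the evil internet
```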
Connecting public ip to localhost Posted: 10 Apr 2022 03:17 AM PDT I've got a server hosting a webpage. From this webpage I want to be able to send HTTP requests to a server hosted on localhost. For this purpose I've got a server socket running on localhost. However, I cannot seem to connect my public IP with localhost, although they are running on the same system. I've already tried this connection by running the webpage on localhost itself, and it worked. To me it seems that there is some kind of Linux or firewall config which prevents this connection. Monitoring packets with tcpdump showed that nothing seems to be sent at all, or it's completely blocked. I've already tried various firewall settings and the suggestion: sysctl -w net.ipv4.conf.all.route_localnet=1 iptables -t nat -I PREROUTING -p tcp --dport 80 -j DNAT --to 127.0.0.1:8080 from https://superuser.com/questions/661772/iptables-redirect-to-localhost but nothing has worked so far. I'm using Ubuntu 20.04 as my operating system. I thought maybe someone had the same issue and could help me out here. Thanks in advance. |
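A hedged first check, before any NAT tricks: a socket bound to 127.0.0.1 is only reachable via loopback, so if the backend can simply listen on all interfaces, the DNAT/route_localnet workaround becomes unnecessary. Port 8080 is taken from the question:

```shell
# Show which address the backend is actually bound to
sudo ss -tlnp | grep ':8080'
# "127.0.0.1:8080" means loopback only; "0.0.0.0:8080" (or "*:8080")
# means the socket is reachable on every interface, including the public IP
```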
My MySQL crashed many times in a month Posted: 10 Apr 2022 05:31 AM PDT My website (WordPress) sometimes stopped working with the error message below: "cannot connect to database" I checked the MySQL log file and found this crash info: ---------- 2021-01-21 0:44:59 0 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins 2021-01-21 0:44:59 0 [Note] InnoDB: Uses event mutexes 2021-01-21 0:44:59 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 2021-01-21 0:44:59 0 [Note] InnoDB: Number of pools: 1 2021-01-21 0:45:00 0 [Note] InnoDB: Using SSE2 crc32 instructions 2021-01-21 0:45:00 0 [Note] InnoDB: Initializing buffer pool, total size = 16M, instances = 1, chunk size = 16M 2021-01-21 0:45:00 0 [Note] InnoDB: Completed initialization of buffer pool 2021-01-21 0:45:00 0 [Note] InnoDB: If the mysqld execution user is authorized, page cleaner thread priority can be changed. See the man page of setpriority(). 2021-01-21 0:45:00 0 [Note] InnoDB: Starting crash recovery from checkpoint LSN=215993122 2021-01-21 0:45:07 0 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins 2021-01-21 0:45:07 0 [Note] InnoDB: Uses event mutexes 2021-01-21 0:45:07 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 2021-01-21 0:45:07 0 [Note] InnoDB: Number of pools: 1 2021-01-21 0:45:07 0 [Note] InnoDB: Using SSE2 crc32 instructions 2021-01-21 0:45:07 0 [Note] InnoDB: Initializing buffer pool, total size = 16M, instances = 1, chunk size = 16M 2021-01-21 0:45:07 0 [Note] InnoDB: Completed initialization of buffer pool 2021-01-21 0:45:07 0 [Note] InnoDB: If the mysqld execution user is authorized, page cleaner thread priority can be changed. See the man page of setpriority(). 
2021-01-21 0:45:07 0 [Note] InnoDB: Starting crash recovery from checkpoint LSN=215993122 2021-01-21 0:50:02 0 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins 2021-01-21 0:50:02 0 [Note] InnoDB: Uses event mutexes 2021-01-21 0:50:02 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 2021-01-21 0:50:02 0 [Note] InnoDB: Number of pools: 1 2021-01-21 0:50:02 0 [Note] InnoDB: Using SSE2 crc32 instructions 2021-01-21 0:50:02 0 [Note] InnoDB: Initializing buffer pool, total size = 16M, instances = 1, chunk size = 16M 2021-01-21 0:50:02 0 [Note] InnoDB: Completed initialization of buffer pool 2021-01-21 0:50:02 0 [Note] InnoDB: If the mysqld execution user is authorized, page cleaner thread priority can be changed. See the man page of setpriority(). 2021-01-21 0:50:02 0 [Note] InnoDB: Starting crash recovery from checkpoint LSN=215993122 2021-01-21 0:50:02 0 [Note] InnoDB: 128 out of 128 rollback segments are active. 2021-01-21 0:50:02 0 [Note] InnoDB: Removed temporary tablespace data file: "ibtmp1" 2021-01-21 0:50:02 0 [Note] InnoDB: Creating shared tablespace for temporary tables 2021-01-21 0:50:02 0 [Note] InnoDB: Setting file '/opt/lampp/var/mysql/ibtmp1' size to 12 MB. Physically writing the file full; Please wait ... 2021-01-21 0:50:02 0 [Note] InnoDB: File '/opt/lampp/var/mysql/ibtmp1' size is now 12 MB. 2021-01-21 0:50:02 0 [Note] InnoDB: Waiting for purge to start 2021-01-21 0:50:02 0 [Note] InnoDB: 10.4.11 started; log sequence number 215993131; transaction id 221150 2021-01-21 0:50:02 0 [Note] Plugin 'FEEDBACK' is disabled. 2021-01-21 0:50:02 0 [Note] InnoDB: Loading buffer pool(s) from /opt/lampp/var/mysql/ib_buffer_pool 2021-01-21 0:50:02 0 [Note] Server socket created on IP: '::'. ---------- I restarted MySQL and my website worked well. The version of my MySQL is: Distrib 10.4.11-MariaDB, for Linux (x86_64) Ubuntu verion 20. This situation appeared few times for a month ago. 
I searched for a solution in some earlier posts, but still cannot fix this issue: MySQL server crashes at least 2 times a week, Wordpress + PHP + apache + mysql, mysql crash every 1/month. Has anyone been stuck in this situation and knows how to fix it? |
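Two details stand out in the log above: the server keeps restarting crash recovery from the same checkpoint, and the buffer pool is only 16M (the XAMPP default), which is very small for a WordPress database and makes an out-of-memory kill of mysqld a plausible cause. A hedged sketch of the config change to try, assuming the my.cnf location from the XAMPP paths in the log and that the machine has RAM to spare:

```ini
; /opt/lampp/etc/my.cnf (path assumed) - raise the InnoDB buffer pool
[mysqld]
innodb_buffer_pool_size = 256M
```

If the crashes continue, checking dmesg or journalctl for an "Out of memory: Killed process ... mysqld" line around the crash timestamps would confirm or rule out the OOM theory.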
How to get phpmyadmin to work with both a reverse proxy and a plain IP:PMA_PORT connection? Posted: 10 Apr 2022 04:00 AM PDT I have a kubernetes cluster configured as follows: nginx+wp[:5050] <-- redirect 307 /wp nginx[:80:443] /pma rev_proxy --> nginx+pma[:5000] All services share a common external metallb IP. I have a couple of problems with this configuration: The nginx+wordpress server is listening on nonstandard port 5050 and is configured to use SSL, so when I try connecting directly to this server via METALLB_IP:5050 I get the error 'The plain HTTP request was sent to HTTPS port', which is expected, since the browser has no way of knowing that the server is expecting an HTTPS request. I attempted to solve this problem by adding this line to my nginx+wordpress configuration: error_page 497 301 =307 https://$host:$server_port$request_uri; It solved the incorrect-protocol problem but introduced a new one: now when trying to access my WordPress administration panel through METALLB_IP:5050/wp-admin I get 'ERR_TOO_MANY_REDIRECTS'. I assume it's because wp-admin does its own redirection, which causes an infinite loop. I tried adding define( 'REDIRECTION_DISABLE', true ); to my wp-config.php but the error persisted. Note: only one port [:5050] is allowed to be open for this service! I've set up a reverse proxy to phpmyadmin with the following block: location /phpmyadmin/ { proxy_pass https://phpmyadmin-service:5000/; proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-Proto https; proxy_set_header X-Forwarded-Host $server_name; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; } But after I input my name and password and click 'Go', my URI is changed to https://METALLB_IP/index.php and I get a '404 Not Found', since my nginx+phpmyadmin server listens on port 5000, not 80/443! 
I should also mention that this error occurs only when trying to connect through the reverse proxy; if I specify METALLB_IP:5000 to connect, everything works fine. Any advice or links to the relevant parts of the documentation would be greatly appreciated! edit: I now have a half-working solution for the phpmyadmin reverse proxy: if I add $cfg['PmaAbsoluteUri'] = '/phpmyadmin'; to my config.inc.php, METALLB_IP/phpmyadmin login/logout starts working! But the downside of this solution is that logging in via a direct connection through METALLB_IP:5000 now gives this error: 'Failed to set session cookie. Maybe you are using HTTP instead of HTTPS to access phpMyAdmin.' This issue might be related to #16496 My current phpmyadmin 5.0.4 configuration: $cfg['Servers'][$i]['auth_type'] = 'cookie'; $cfg['Servers'][$i]['compress'] = false; $cfg['Servers'][$i]['AllowNoPassword'] = false; $cfg['Servers'][$i]['host'] = "mysql-service"; $cfg['Servers'][$i]['port'] = "3306"; $cfg['Servers'][$i]['user'] = "mysql"; $cfg['Servers'][$i]['password'] = "mysql"; $cfg['Servers'][$i]['extension'] = 'mysqli'; $cfg['PmaAbsoluteUri'] = '/phpmyadmin'; |
How can I redirect back from https to http URLs after I removed a Let's Encrypt SSL certificate in apache2 & nginx Posted: 09 Apr 2022 10:06 PM PDT I configured a Let's Encrypt certificate using certbot-auto and HTTPS worked, but when I tried to remove my domain's certificate using certbot-auto delete..., my WordPress and phpMyAdmin sites stopped being reachable: everything keeps redirecting to https and gives a 404 / "can't reach this page" error, although the apache2 and nginx index pages can still be reached. I have already cleared the cache, history and cookies, and checked the MySQL database; the site URL is set to http://, but it still redirects to https:// and says it can't reach the page. I need to check something at a deeper level, but I don't know what to check. I tried deleting the ssl.confs that were made by Let's Encrypt, but still no results. I need help re-configuring my servers back to the http redirect from the https redirect set up during certificate configuration. What are the steps needed to turn it back to an http redirect? Thanks for your help... |
Why does "file:///Users/username/Library/proxy.pac" not work in MacOS? Posted: 10 Apr 2022 02:03 AM PDT I tested setting up a proxy.pac file via a web server: networksetup -setautoproxyurl "Wi-Fi" "http://localhost/proxy.pac" and via a file directly: networksetup -setautoproxyurl "Wi-Fi" "file:///Users/username/Library/proxy.pac" The web server method works (provided I arrange for the proxy.pac to be served by a web server). The file method doesn't work. What's the problem, fundamentally? The motivation for the question is that I'd like to simplify things and would prefer not to have to run an HTTP server. |
Lua missing socket.http module Posted: 10 Apr 2022 01:06 AM PDT I am using Lua with HAProxy. I have set LUA_PATH and LUA_CPATH as the following: LUA_PATH=/usr/bin/lua;/usr/bin/lua5.3;/usr/share/lua/5.3/ltn12.lua;/usr/share/lua/5.3/mime.lua;/usr/share/lua/5.3/socket.lua;/usr/share/lua/5.3/ssl.lua;/usr/share/lua/5.3/posix/_argcheck.lua;/usr/share/lua/5.3/posix/compat.lua;/usr/share/lua/5.3/posix/deprecated.lua;/usr/share/lua/5.3/posix/init.lua;/usr/share/lua/5.3/posix/sys.lua;/usr/share/lua/5.3/posix/util.lua;/usr/share/lua/5.3/posix/version.lua;/usr/share/lua/5.3/socket/ftp.lua;/usr/share/lua/5.3/socket/headers.lua;/usr/share/lua/5.3/socket/http.lua;/usr/share/lua/5.3/socket/smtp.lua;/usr/share/lua/5.3/socket/tp.lua;/usr/share/lua/5.3/socket/url.lua;/usr/share/lua/5.3/term/colors.lua;/usr/share/lua/5.3/term/cursor.lua;/usr/share/lua/5.3/term/init.lua;/usr/share/lua/5.3/ssl/https.lua;/usr/share/lua/5.3/luarocks/add.lua;/usr/share/lua/5.3/luarocks/admin_remove.lua;/usr/share/lua/5.3/luarocks/build.lua;/usr/share/lua/5.3/luarocks/cache.lua;/usr/share/lua/5.3/luarocks/cfg.lua;/usr/share/lua/5.3/luarocks/command_line.lua;/usr/share/lua/5.3/luarocks/config_cmd.lua;/usr/share/lua/5.3/luarocks/deps.lua;/usr/share/lua/5.3/luarocks/dir.lua;/usr/share/lua/5.3/luarocks/doc.lua;/usr/share/lua/5.3/luarocks/download.lua;/usr/share/lua/5.3/luarocks/fetch.lua;/usr/share/lua/5.3/luarocks/fs.lua;/usr/share/lua/5.3/luarocks/help.lua;/usr/share/lua/5.3/luarocks/index.lua;/usr/share/lua/5.3/luarocks/install.lua;/usr/share/lua/5.3/luarocks/lint.lua;/usr/share/lua/5.3/luarocks/list.lua;/usr/share/lua/5.3/luarocks/loader.lua;/usr/share/lua/5.3/luarocks/make.lua;/usr/share/lua/5.3/luarocks/make_manifest.lua;/usr/share/lua/5.3/luarocks/manif_core.lua;/usr/share/lua/5.3/luarocks/manif.lua;/usr/share/lua/5.3/luarocks/new_version.lua;/usr/share/lua/5.3/luarocks/pack.lua;/usr/share/lua/5.3/luarocks/path_cmd.lua;/usr/share/lua/5.3/luarocks/path.lua;/usr/share/lua/5.3/luarocks/persist.lua;/usr/share
/lua/5.3/luarocks/purge.lua;/usr/share/lua/5.3/luarocks/refresh_cache.lua;/usr/share/lua/5.3/luarocks/remove.lua;/usr/share/lua/5.3/luarocks/repos.lua;/usr/share/lua/5.3/luarocks/require.lua;/usr/share/lua/5.3/luarocks/search.lua;/usr/share/lua/5.3/luarocks/show.lua;/usr/share/lua/5.3/luarocks/site_config.lua;/usr/share/lua/5.3/luarocks/type_check.lua;/usr/share/lua/5.3/luarocks/unpack.lua;/usr/share/lua/5.3/luarocks/upload.lua;/usr/share/lua/5.3/luarocks/util.lua;/usr/share/lua/5.3/luarocks/validate.lua;/usr/share/lua/5.3/luarocks/write_rockspec.lua;/usr/share/lua/5.3/luarocks/build/builtin.lua;/usr/share/lua/5.3/luarocks/build/cmake.lua;/usr/share/lua/5.3/luarocks/build/command.lua;/usr/share/lua/5.3/luarocks/build/make.lua;/usr/share/lua/5.3/luarocks/fs/lua.lua;/usr/share/lua/5.3/luarocks/fs/tools.lua;/usr/share/lua/5.3/luarocks/fs/unix.lua;/usr/share/lua/5.3/luarocks/fs/win32.lua;/usr/share/lua/5.3/luarocks/fetch/cvs.lua;/usr/share/lua/5.3/luarocks/fetch/git_file.lua;/usr/share/lua/5.3/luarocks/fetch/git_http.lua;/usr/share/lua/5.3/luarocks/fetch/git_https.lua;/usr/share/lua/5.3/luarocks/fetch/git.lua;/usr/share/lua/5.3/luarocks/fetch/git_ssh.lua;/usr/share/lua/5.3/luarocks/fetch/hg_http.lua;/usr/share/lua/5.3/luarocks/fetch/hg_https.lua;/usr/share/lua/5.3/luarocks/fetch/hg.lua;/usr/share/lua/5.3/luarocks/fetch/hg_ssh.lua;/usr/share/lua/5.3/luarocks/fetch/sscm.lua;/usr/share/lua/5.3/luarocks/fetch/svn.lua;/usr/share/lua/5.3/luarocks/tools/patch.lua;/usr/share/lua/5.3/luarocks/tools/tar.lua;/usr/share/lua/5.3/luarocks/tools/zip.lua;/usr/share/lua/5.3/luarocks/upload/api.lua;/usr/share/lua/5.3/luarocks/upload/multipart.lua 
LUA_CPATH=/usr/lib64/lua/5.3/posix/ctype.so;/usr/lib64/lua/5.3/posix/dirent.so;/usr/lib64/lua/5.3/posix/errno.so;/usr/lib64/lua/5.3/posix/fcntl.so;/usr/lib64/lua/5.3/posix/fnmatch.so;/usr/lib64/lua/5.3/posix/glob.so;/usr/lib64/lua/5.3/posix/grp.so;/usr/lib64/lua/5.3/posix/libgen.so;/usr/lib64/lua/5.3/posix/poll.so;/usr/lib64/lua/5.3/posix/pwd.so;/usr/lib64/lua/5.3/posix/sched.so;/usr/lib64/lua/5.3/posix/signal.so;/usr/lib64/lua/5.3/posix/stdio.so;/usr/lib64/lua/5.3/posix/stdlib.so;/usr/lib64/lua/5.3/posix/syslog.so;/usr/lib64/lua/5.3/posix/termio.so;/usr/lib64/lua/5.3/posix/time.so;/usr/lib64/lua/5.3/posix/unistd.so;/usr/lib64/lua/5.3/posix/utime.so;/usr/lib64/lua/5.3/mime/core.so;/usr/lib64/lua/5.3/socket/core.so;/usr/lib64/lua/5.3/socket/serial.so;/usr/lib64/lua/5.3/socket/unix.so;/usr/lib64/lua/5.3/term/core.so;/usr/lib64/lua/5.3/ssl.so;/usr/lib64/lua/5.3/zlib.so My haproxy.cfg file tries to load auth-request.lua in the global section of the file. When I try to start HAProxy, the service fails and doing journalctl -xe gives the following error: module socket.http not found . I also installed luasocket using luarocks . EDIT I read more about the require function in Lua from this post. The following paragraph was of interest: The path used by require is a little different from typical paths. Most programs use paths as a list of directories wherein to search for a given file. However, ANSI C (the abstract platform where Lua runs) does not have the concept of directories. Therefore, the path used by require is a list of patterns, each of them specifying an alternative way to transform a virtual file name (the argument to require) into a real file name. More specifically, each component in the path is a file name containing optional interrogation marks. For each component, require replaces each ?´ by the virtual file name and checks whether there is a file with that name; if not, it goes to the next component. 
So, following this, I modified my LUA_PATH and LUA_CPATH as follows: LUA_CPATH="/usr/lib64/lua/5.3/?/?.so" LUA_PATH="/usr/share/lua/5.3/?/?.lua" Unfortunately, the error is still there. |
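The quoted paragraph is the key: both variables are lists of patterns containing ?, not lists of file names. For require("socket.http"), Lua converts the module name to socket/http and substitutes that for every ? in each pattern, so a pattern of the form ?/?.lua expands to the nonexistent socket/http/socket/http.lua, while ?.lua expands correctly. A sketch of working values (directories taken from the paths already listed above), plus a shell one-liner mimicking the substitution:

```shell
# Patterns, not file lists; a trailing ";;" appends Lua's compiled-in defaults
export LUA_PATH="/usr/share/lua/5.3/?.lua;/usr/share/lua/5.3/?/init.lua;;"
export LUA_CPATH="/usr/lib64/lua/5.3/?.so;;"

# Mimic require's substitution for the module "socket.http"
mod="socket.http"
pattern="/usr/share/lua/5.3/?.lua"
echo "$pattern" | sed "s|?|$(echo "$mod" | tr . /)|"
# -> /usr/share/lua/5.3/socket/http.lua
```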
How to stop the Exchange 2010 server only - O365 migration Posted: 10 Apr 2022 12:03 AM PDT I have recently done a migration to Office 365 and I have an in-house Exchange 2010 server. As I expected, there are people who are still connected to the old server even though there have been numerous emails regarding the update. I really wish I could shut down the Exchange server so users with outdated settings would get bumped and we could identify them that way, but I have inventory software running on that same server, so I cannot physically turn it off. What is the best way to stop Exchange 2010 (ONLY) from running and responding without shutting the machine down? Thanks |
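A hedged sketch of one approach, assuming the standard Exchange 2010 service names (verify with Get-Service before relying on them): stopping the Information Store dismounts the mailbox databases, so stale Outlook profiles fail to connect while everything else on the box keeps running:

```powershell
# List the Exchange services actually present on the box
Get-Service *MSExchange* | Sort-Object Status

# Stop and disable the ones clients talk to (names assumed)
Stop-Service MSExchangeIS, MSExchangeTransport -Force
Set-Service MSExchangeIS -StartupType Disabled
Set-Service MSExchangeTransport -StartupType Disabled
```

Connection attempts from the stragglers may then show up as failed connections in the event log, which can help identify them.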
Can't add more servers to Galera Cluster --> [ERROR] /usr/sbin/mysqld: unknown option '--.' Posted: 10 Apr 2022 12:03 AM PDT I've tried to set up a MariaDB cluster with Galera on my Debian 8 server. When I try to add servers to the one that already exists in the cluster with: systemctl start mysql I get this error: Job for mariadb.service failed. See 'systemctl status mariadb.service' and 'journalctl -xn' for details. I found this line in the journalctl -xn output: Okt 12 09:35:29 node2 sh[5272]: 2016-10-12 9:35:27 3045435776 [ERROR] /usr/sbin/mysqld: unknown option '--.' So I opened /etc/init.d/mysql with nano , but couldn't find -- anywhere. Does anyone know how to solve this? Here is the result of systemctl status mariadb.service : ● mariadb.service - MariaDB database server Loaded: loaded (/lib/systemd/system/mariadb.service; enabled) Drop-In: /etc/systemd/system/mariadb.service.d └─migrated-from-my.cnf-settings.conf Active: failed (Result: exit-code) since Mi 2016-10-12 09:35:29 CEST; 4min 36s ago Process: 5272 ExecStartPre=/bin/sh -c [ ! -e /usr/bin/galera_recovery ] && VAR= || VAR=`/usr/bin/galera_recovery`; [ $? -eq 0 ] && systemctl set-environment _WSREP_START_POSITION=$VAR || exit 1 (code=exited, status=1/FAILURE) Process: 5267 ExecStartPre=/bin/sh -c systemctl unset-environment _WSREP_START_POSITION (code=exited, status=0/SUCCESS) Process: 5266 ExecStartPre=/usr/bin/install -m 755 -o mysql -g root -d /var/run/mysqld (code=exited, status=0/SUCCESS) Main PID: 4281 (code=exited, status=0/SUCCESS) Okt 12 09:35:29 node2 sh[5272]: 2016-10-12 9:35:27 3045435776 [Note] InnoDB: 128 rollback segment(s) are active. 
Okt 12 09:35:29 node2 sh[5272]: 2016-10-12 9:35:27 3045435776 [Note] InnoDB: Waiting for purge to start Okt 12 09:35:29 node2 sh[5272]: 2016-10-12 9:35:27 3045435776 [Note] InnoDB: Percona XtraDB (http://www.percona.com) 5.6.32-78.1 started; log sequence number 1616869 Okt 12 09:35:29 node2 sh[5272]: 2016-10-12 9:35:27 3045435776 [Warning] InnoDB: Skipping buffer pool dump/restore during wsrep recovery. Okt 12 09:35:29 node2 sh[5272]: 2016-10-12 9:35:27 3045435776 [Note] Plugin 'FEEDBACK' is disabled. Okt 12 09:35:29 node2 sh[5272]: 2016-10-12 9:35:27 3045435776 [ERROR] /usr/sbin/mysqld: unknown option '--.' Okt 12 09:35:29 node2 sh[5272]: 2016-10-12 9:35:27 3045435776 [ERROR] Aborting' Okt 12 09:35:29 node2 systemd[1]: mariadb.service: control process exited, code=exited status=1 Okt 12 09:35:29 node2 systemd[1]: Failed to start MariaDB database server. Okt 12 09:35:29 node2 systemd[1]: Unit mariadb.service entered failed state. Here is the result of journalctl -xn : -- Logs begin at Di 2016-10-11 16:10:14 CEST, end at Mi 2016-10-12 09:35:29 CEST. -- Okt 12 09:35:29 node2 sh[5272]: 2016-10-12 9:35:27 3045435776 [Note] InnoDB: 128 rollback segment(s) are active. Okt 12 09:35:29 node2 sh[5272]: 2016-10-12 9:35:27 3045435776 [Note] InnoDB: Waiting for purge to start Okt 12 09:35:29 node2 sh[5272]: 2016-10-12 9:35:27 3045435776 [Note] InnoDB: Percona XtraDB (http://www.percona.com) 5. Okt 12 09:35:29 node2 sh[5272]: 2016-10-12 9:35:27 3045435776 [Warning] InnoDB: Skipping buffer pool dump/restore during Okt 12 09:35:29 node2 sh[5272]: 2016-10-12 9:35:27 3045435776 [Note] Plugin 'FEEDBACK' is disabled. Okt 12 09:35:29 node2 sh[5272]: 2016-10-12 9:35:27 3045435776 [ERROR] /usr/sbin/mysqld: unknown option '--.' Okt 12 09:35:29 node2 sh[5272]: 2016-10-12 9:35:27 3045435776 [ERROR] Aborting' Okt 12 09:35:29 node2 systemd[1]: mariadb.service: control process exited, code=exited status=1 Okt 12 09:35:29 node2 systemd[1]: Failed to start MariaDB database server. 
-- Subject: Unit mariadb.service has failed -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit mariadb.service has failed. -- -- The result is failed. Okt 12 09:35:29 node2 systemd[1]: Unit mariadb.service entered failed state. Here is /etc/init.d/mysql : # and its wrapper script "mysqld_safe". ### END INIT INFO # set -e set -u ${DEBIAN_SCRIPT_DEBUG:+ set -v -x} test -x /usr/sbin/mysqld || exit 0 .
/lib/lsb/init-functions SELF=$(cd $(dirname $0); pwd -P)/$(basename $0) CONF=/etc/mysql/my.cnf MYADMIN="/usr/bin/mysqladmin --defaults-file=/etc/mysql/debian.cnf" # priority can be overriden and "-s" adds output to stderr ERR_LOGGER="logger -p daemon.err -t /etc/init.d/mysql -i" # Safeguard (relative paths, core dumps..) cd / umask 077 # mysqladmin likes to read /root/.my.cnf. This is usually not what I want # as many admins e.g. only store a password without a username there and # so break my scripts. export HOME=/etc/mysql/ # Source default config file. [ -r /etc/default/mariadb ] && . /etc/default/mariadb ## Fetch a particular option from mysql's invocation. # # Usage: void mysqld_get_param option mysqld_get_param() { /usr/sbin/mysqld --print-defaults \ | tr " " "\n" \ | grep -- "--$1" \ | tail -n 1 \ | cut -d= -f2 } ## Do some sanity checks before even trying to start mysqld. sanity_checks() { # check for config file if [ ! -r /etc/mysql/my.cnf ]; then log_warning_msg "$0: WARNING: /etc/mysql/my.cnf cannot be read. See README.Debian.gz" echo "WARNING: /etc/mysql/my.cnf cannot be read. See README.Debian.gz" | $ERR_LOGGER fi # check for diskspace shortage datadir=`mysqld_get_param datadir` if LC_ALL=C BLOCKSIZE= df --portability $datadir/. | tail -n 1 | awk '{ exit ($4>4096) }'; then log_failure_msg "$0: ERROR: The partition with $datadir is too full!" echo "ERROR: The partition with $datadir is too full!" | $ERR_LOGGER exit 1 fi } ## Checks if there is a server running and if so if it is accessible. # # check_alive insists on a pingable server # check_dead also fails if there is a lost mysqld in the process list # # Usage: boolean mysqld_status [check_alive|check_dead] [warn|nowarn] mysqld_status () { ping_output=`$MYADMIN ping 2>&1`; ping_alive=$(( ! $? 
)) ps_alive=0 pidfile=`mysqld_get_param pid-file` if [ -f "$pidfile" ] && ps `cat $pidfile` >/dev/null 2>&1; then ps_alive=1; fi if [ "$1" = "check_alive" -a $ping_alive = 1 ] || [ "$1" = "check_dead" -a $ping_alive = 0 -a $ps_alive = 0 ]; then return 0 # EXIT_SUCCESS else if [ "$2" = "warn" ]; then echo -e "$ps_alive processes alive and '$MYADMIN ping' resulted in\n$ping_output\n" | $ERR_LOGGER -p daemon.debug fi return 1 # EXIT_FAILURE fi } # # main() # case "${1:-''}" in 'start') sanity_checks; # Start daemon log_daemon_msg "Starting MariaDB database server" "mysqld" if mysqld_status check_alive nowarn; then log_progress_msg "already running" log_end_msg 0 else # Could be removed during boot test -e /var/run/mysqld || install -m 755 -o mysql -g root -d /var/run/mysqld # Start MariaDB! /usr/bin/mysqld_safe "${@:2}" > /dev/null 2>&1 & # 6s was reported in #352070 to be too little for i in $(seq 1 "${MYSQLD_STARTUP_TIMEOUT:-60}"); do sleep 1 if mysqld_status check_alive nowarn ; then break; fi log_progress_msg "." done if mysqld_status check_alive warn; then log_end_msg 0 # Now start mysqlcheck or whatever the admin wants. output=$(/etc/mysql/debian-start) [ -n "$output" ] && log_action_msg "$output" else log_end_msg 1 log_failure_msg "Please take a look at the syslog" fi fi ;; 'stop') # * As a passwordless mysqladmin (e.g. via ~/.my.cnf) must be possible # at least for cron, we can rely on it here, too. (although we have # to specify it explicit as e.g. sudo environments points to the normal # users home and not /root) log_daemon_msg "Stopping MariaDB database server" "mysqld" if ! mysqld_status check_dead nowarn; then set +e shutdown_out=`$MYADMIN shutdown 2>&1`; r=$? 
set -e if [ "$r" -ne 0 ]; then log_end_msg 1 [ "$VERBOSE" != "no" ] && log_failure_msg "Error: $shutdown_out" log_daemon_msg "Killing MariaDB database server by signal" "mysqld" killall -15 mysqld server_down= for i in `seq 1 600`; do sleep 1 if mysqld_status check_dead nowarn; then server_down=1; break; fi done if test -z "$server_down"; then killall -9 mysqld; fi fi fi if ! mysqld_status check_dead warn; then log_end_msg 1 log_failure_msg "Please stop MariaDB manually and read /usr/share/doc/mariadb-server-10.1/README.Debian.gz!" exit -1 else log_end_msg 0 fi ;; 'restart') set +e; $SELF stop; set -e $SELF start ;; 'reload'|'force-reload') log_daemon_msg "Reloading MariaDB database server" "mysqld" $MYADMIN reload log_end_msg 0 ;; 'status') if mysqld_status check_alive nowarn; then log_action_msg "$($MYADMIN version)" else log_action_msg "MariaDB is stopped." exit 3 fi ;; 'bootstrap') # Bootstrap the cluster, start the first node # that initiates the cluster log_daemon_msg "Bootstrapping the cluster" "mysqld" $SELF start "${@:2}" --wsrep-new-cluster ;; *) echo "Usage: $SELF start|stop|restart|reload|force-reload|status|bootstrap" exit 1 ;; esac Here is the my.cnf: # MariaDB database server configuration file. # # You can copy this file to one of: # - "/etc/mysql/my.cnf" to set global options, # - "~/.my.cnf" to set user-specific options. # # One can use all long options that the program supports. # Run program with --help to get a list of available options and with # --print-defaults to see which it would actually understand and use. # # For explanations see # http://dev.mysql.com/doc/mysql/en/server-system-variables.html # This will be passed to all mysql clients # It has been reported that passwords should be enclosed with ticks/quotes # escpecially if they contain "#" chars... # Remember to edit /etc/mysql/debian.cnf when changing the socket location. 
[client] port = 3306 socket = /var/run/mysqld/mysqld.sock # Here is entries for some specific programs # The following values assume you have at least 32M ram # This was formally known as [safe_mysqld]. Both versions are currently parsed. [mysqld_safe] socket = /var/run/mysqld/mysqld.sock nice = 0 [mysqld] # # * Basic Settings # user = mysql pid-file = /var/run/mysqld/mysqld.pid socket = /var/run/mysqld/mysqld.sock port = 3306 basedir = /usr datadir = /var/lib/mysql tmpdir = /tmp lc_messages_dir = /usr/share/mysql lc_messages = en_US skip-external-locking # # Instead of skip-networking the default is now to listen only on # localhost which is more compatible and is not less secure. bind-address = 127.0.0.1 # # * Fine Tuning # max_connections = 100 connect_timeout = 5 wait_timeout = 600 max_allowed_packet = 16M thread_cache_size = 128 sort_buffer_size = 4M bulk_insert_buffer_size = 16M tmp_table_size = 32M max_heap_table_size = 32M # # * MyISAM # # This replaces the startup script and checks MyISAM tables if needed # the first time they are touched. On error, make copy and try a repair. myisam_recover_options = BACKUP key_buffer_size = 128M #open-files-limit = 2000 table_open_cache = 400 myisam_sort_buffer_size = 512M concurrent_insert = 2 read_buffer_size = 2M read_rnd_buffer_size = 1M # # * Query Cache Configuration # # Cache only tiny result sets, so we can fit more in the query cache. query_cache_limit = 128K query_cache_size = 64M # for more write intensive setups, set to DEMAND or OFF #query_cache_type = DEMAND # # * Logging and Replication # # Both location gets rotated by the cronjob. # Be aware that this log type is a performance killer. # As of 5.1 you can enable the log at runtime! #general_log_file = /var/log/mysql/mysql.log #general_log = 1 # # Error logging goes to syslog due to /etc/mysql/conf.d/mysqld_safe_syslog.cnf. 
# # we do want to know about network errors and such log_warnings = 2 # # Enable the slow query log to see queries with especially long duration #slow_query_log[={0|1}] slow_query_log_file = /var/log/mysql/mariadb-slow.log long_query_time = 10 #log_slow_rate_limit = 1000 log_slow_verbosity = query_plan #log-queries-not-using-indexes #log_slow_admin_statements # # The following can be used as easy to replay backup logs or for replication. # note: if you are setting up a replication slave, see README.Debian about # other settings you may need to change. #server-id = 1 #report_host = master1 #auto_increment_increment = 2 #auto_increment_offset = 1 log_bin = /var/log/mysql/mariadb-bin log_bin_index = /var/log/mysql/mariadb-bin.index # not fab for performance, but safer #sync_binlog = 1 expire_logs_days = 10 max_binlog_size = 100M # slaves #relay_log = /var/log/mysql/relay-bin #relay_log_index = /var/log/mysql/relay-bin.index #relay_log_info_file = /var/log/mysql/relay-bin.info #log_slave_updates #read_only # # If applications support it, this stricter sql_mode prevents some # mistakes like inserting invalid dates etc. #sql_mode = NO_ENGINE_SUBSTITUTION,TRADITIONAL # # * InnoDB # # InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/. # Read the manual for more InnoDB related options. There are many! default_storage_engine = InnoDB # you can't just change log file size, requires special procedure #innodb_log_file_size = 50M innodb_buffer_pool_size = 256M innodb_log_buffer_size = 8M innodb_file_per_table = 1 innodb_open_files = 400 innodb_io_capacity = 400 innodb_flush_method = O_DIRECT # # * Security Features # # Read the manual, too, if you want chroot! # chroot = /var/lib/mysql/ # # For generating SSL certificates I recommend the OpenSSL GUI "tinyca". 
# # ssl-ca=/etc/mysql/cacert.pem # ssl-cert=/etc/mysql/server-cert.pem # ssl-key=/etc/mysql/server-key.pem # # * Galera-related settings # [galera] # Mandatory settings #wsrep_on=ON #wsrep_provider= #wsrep_cluster_address= #binlog_format=row #default_storage_engine=InnoDB #innodb_autoinc_lock_mode=2 # # Allow server to accept connections on all interfaces. # #bind-address=0.0.0.0 # # Optional setting #wsrep_slave_threads=1 #innodb_flush_log_at_trx_commit=0 [mysqldump] quick quote-names max_allowed_packet = 16M [mysql] #no-auto-rehash # faster start of mysql but no tab completion [isamchk] key_buffer = 16M # # * IMPORTANT: Additional settings that can override those from this file! # The files must end with '.cnf', otherwise they'll be ignored. # !includedir /etc/mysql/conf.d/ The custom config I used and named galera.cnf : [mysqld] .#mysql settings binlog_format=ROW default-storage-engine=innodb innodb_autoinc_lock_mode=2 innodb_doublewrite=1 query_cache_size=0 query_cache_type=0 bind-address=0.0.0.0 #galera settings wsrep_on=ON wsrep_provider=/usr/lib/galera/libgalera_smm.so wsrep_cluster_name="skyfillers_cluster" wsrep_cluster_address=gcomm://192.168.2.162,192.168.2.164,192.168.2.163 wsrep_sst_method=rsync |
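For what it's worth, one thing stands out in the galera.cnf quoted above: the line .#mysql settings begins with a stray dot before the comment marker, and a junk token like that in a config file is exactly the kind of thing mysqld's option parser reports as an unknown option (the error even names the option '--.'). A cleaned-up version of that fragment might look like this (cluster name and addresses copied from the question; the fix itself is only a guess based on the error message):

```ini
[mysqld]
# mysql settings  -- note: plain "#" comment, no leading "." before it
binlog_format=ROW
default-storage-engine=innodb
innodb_autoinc_lock_mode=2
innodb_doublewrite=1
query_cache_size=0
query_cache_type=0
bind-address=0.0.0.0

# galera settings
wsrep_on=ON
wsrep_provider=/usr/lib/galera/libgalera_smm.so
wsrep_cluster_name="skyfillers_cluster"
wsrep_cluster_address=gcomm://192.168.2.162,192.168.2.164,192.168.2.163
wsrep_sst_method=rsync
```

Running /usr/sbin/mysqld --print-defaults shows exactly which options mysqld picks up from the config files, which is a quick way to confirm whether the stray token is really the culprit.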
Windows Server 2012 R2 sometimes fails to boot (or shut down?) - stuck at spinner Posted: 10 Apr 2022 04:00 AM PDT Our Windows Server 2012 R2 VMware virtual machines have scheduled tasks defined to reboot them weekly using shutdown.exe /r and some of these servers fail to restart some of the time. When I connect to the server via VMRC I see a screen stuck at the boot spinner. I have to power cycle the server to get it to work again. The System event log doesn't contain any errors. The last messages before shutdown are: The IKE and AuthIP IPsec Keying Modules service entered the stopped state. The kernel power manager has initiated a shutdown transition. After power-cycling there is a message like this: The last shutdown's success status was false. The last boot's success status was false. I tried enabling boot logging, but there are no logs at all for the failed boot in %SystemRoot%\ntbtlog.txt . So either the problem occurs before the first log entry is written or it's actually failing to shut down. If I manually reboot the server, including by running shutdown /r from a command prompt, this works. What else can I do to troubleshoot this? |
Switching Log4j mode to DEBUG in Weblogic Server Posted: 10 Apr 2022 03:03 AM PDT I am using weblogic 10.5 and my application has 4 managed servers, with the below configuration in the weblogic startup script to enable log4j. JAVA_PROPERTIES="${JAVA_PROPERTIES} -Dlog4j.configuration=file:${LOG4J_CONFIG_FILE}" I have given the log4j.properties path to LOG4J_CONFIG_FILE. I brought down the server, tried updating the log4j mode from ERROR to DEBUG in log4j.properties and bounced the server. When the server restarts, the log4j.properties file has reverted to ERROR mode. Kindly advise |
How to ban Syn Flood Attacks using Fail2Ban? Posted: 10 Apr 2022 03:49 AM PDT In my log, I am frequently seeing dropped IPs like this: > Oct 30 17:32:24 IPTables Dropped: IN=eth0 OUT= > MAC=04:01:2b:bd:b0:01:4c:96:14:ff:df:f0:08:00 SRC=62.210.94.116 > DST=128.199.xxx.xxx LEN=40 TOS=0x00 PREC=0x00 TTL=244 ID=45212 > PROTO=TCP SPT=51266 DPT=5900 WINDOW=1024 RES=0x00 SYN URGP=0 > > Oct 30 17:29:57 Debian kernel: [231590.140175] IPTables Dropped: > IN=eth0 OUT= MAC=04:01:2b:bd:b0:01:4c:96:14:ff:ff:f0:08:00 > SRC=69.30.240.90 DST=128.199.xxx.xxx LEN=40 TOS=0x00 PREC=0x00 TTL=245 > ID=12842 DF PROTO=TCP SPT=18534 DPT=8061 WINDOW=512 RES=0x00 SYN > URGP=0 From the above, I am assuming these are the SYN floods that are being dropped by my iptables rules. This is what I have in iptables for SYN (although I'm not sure which one of these rules is dropping the ones above): # Drop bogus TCP packets iptables -A INPUT -p tcp -m tcp --tcp-flags SYN,FIN SYN,FIN -j DROP iptables -A INPUT -p tcp -m tcp --tcp-flags SYN,RST SYN,RST -j DROP # --- Common Attacks: Null packets, XMAS Packets and Syn-Flood Attack --- iptables -A INPUT -p tcp --tcp-flags ALL NONE -j DROP iptables -A INPUT -p tcp --tcp-flags ALL ALL -j DROP iptables -A INPUT -p tcp ! --syn -m state --state NEW -j DROP In Fail2ban, I don't see any specific filter for SYN attacks in the filter.d folder. My questions are: 1) Do I just ignore the above logs and not worry about setting up a Fail2Ban filter for these, since it's the internet and there are constantly going to be script kiddies doing this anyway? 2) Since Fail2ban works based on iptables logs, is there a way to ban the above SYN attempts on my server? This is my (lame) attempt at a filter and it's not working. Not sure if it's even valid: [Definition] failregex = ^<HOST> -.*IPTables Dropped:.*SYN URGP=0 ignoreregex = I am using Debian + Nginx |
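A sketch of what a working filter for the question above might look like: fail2ban's <HOST> tag has to match the attacker's address, which in these kernel-log lines is the SRC= field, not the start of the line (the line starts with a timestamp and hostname that fail2ban strips/skips separately). Untested, and assuming the drops land in /var/log/syslog; the filter filename and jail name are made up for illustration:

```ini
# /etc/fail2ban/filter.d/iptables-syn.conf  (hypothetical filename)
[Definition]
# Match the kernel-log prefix used in the question and capture the
# source address of the dropped SYN packet.
failregex = IPTables Dropped:.* SRC=<HOST> .*SYN
ignoreregex =

# /etc/fail2ban/jail.local  (hypothetical jail using the filter above)
[iptables-syn]
enabled  = true
filter   = iptables-syn
logpath  = /var/log/syslog
maxretry = 3
bantime  = 3600
```

fail2ban ships a test tool for exactly this: fail2ban-regex /var/log/syslog /etc/fail2ban/filter.d/iptables-syn.conf will report how many log lines the failregex matches before you enable the jail.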
Windows Server 2012 VPN route from LAN to client Posted: 10 Apr 2022 05:00 AM PDT I am trying to set up a VPN using Windows Server 2012 Standard. My network has a SonicWALL edge router, and the server in question is a DC, running DNS, DHCP and RRAS, and the subnet is 10.0.0.0/24. I configured using the 'advanced' wizard, enabling both DirectAccess and VPN. Clients can now connect to the VPN. They receive an IP address from DHCP in the 10.0.0.0/24 subnet and can then access the VPN server using either the PPP adaptor IP or the Ethernet adaptor IP on it. However, if they try to ping anything else, one ping reply is received and the rest disappear. The VPN is successfully routing the client's packets to the destination host - I can see this in Wireshark - but only one reply actually goes to the client; the rest go to the server. Similarly, if I ping the client from an internal host, the first packet goes to the client, and the rest act as if it is addressing the VPN server itself. It is clear that the VPN server is routing things sent from the client to the internal network, but is only routing one packet at a time from the internal network to the client. I can't think of a reason that the behaviour would be different for the first packet than subsequent ones. Any advice would be appreciated. |
Exception Enumerating SQL Server Instances with SMO WMI ManagedComputer Posted: 10 Apr 2022 02:03 AM PDT I'm trying to use the SMO WMI API/objects in PowerShell 2.0 on Windows 7 with SQL Server 2008 R2 installed to get a list of SQL Server instances on the local computer using the ManagedComputer object. However, I'm getting exceptions after I instantiate the objects when I try to access any data on them. I'm running PowerShell as an administrator. $computer = New-Object Microsoft.SqlServer.Management.Smo.WMI.ManagedComputer ($env:computername) $computer.ServerInstances Results in this error: The following exception was thrown when trying to enumerate the collection: "An exception occurred in SMO while trying to manage a service.". At line:1 char:89 + (New-Object Microsoft.SqlServer.Management.Smo.WMI.ManagedComputer ($env:computername)). <<<< ServerInstances + CategoryInfo : NotSpecified: (:) [], ExtendedTypeSystemException + FullyQualifiedErrorId : ExceptionInGetEnumerator Is there some service I have to enable to get this to work? The WMI service is running. Is there some other setting I need? Why can't I enumerate SQL Server instances? |
squid proxy overrides my host header, what can I do? Posted: 10 Apr 2022 03:03 AM PDT I have the following problem with squid: Some clients concurrently access 62 servers via a squid proxy. All servers have the same host name (www.example.com) but different public IPs (123.123.123.2 to 123.123.123.63). The server owner uses a kind of DNS round robin. I only have access to the squid server and to the clients. The connection to the servers is only allowed via the proxy IPs. Without the proxy, I just send a request to load data from http://123.123.123.5/dataforme but send the host header "www.example.com". That works like it should. But now, using squid as a non-caching proxy, this is not possible anymore. Squid overrides the request's host header. Because of this, the destination server does not know what data to deliver, since the host header is not transmitted correctly. I set url_rewrite_host_header off but this does not have any effect. It seems that this option is only used when using squid as a redirector, which is not what I do. The following would work if there were no other sites to fetch: header_access Host deny all header_replace Host www.example.com But when I do this, every request gets the www.example.com host header, and then it is not possible to access other sites anymore. What can I do? Thanks in advance for your help! |
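One hedged idea for the squid question above (untested, and the directive names changed to request_header_access / request_header_replace in squid 3.x): header_replace only rewrites headers that header_access has denied, so restricting the deny to an ACL covering just those 62 destination IPs should leave every other site's Host header untouched:

```squid
# squid.conf sketch -- the IP range is taken from the question
acl samehostfarm dst 123.123.123.2-123.123.123.63
header_access Host deny samehostfarm
header_replace Host www.example.com
```

Because the deny is scoped by the dst ACL, requests to any other destination keep their original Host header; only requests to the server farm get the replacement value.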
Using Plain type listener for WebLogic NodeManager Posted: 09 Apr 2022 11:02 PM PDT I have configured our managed servers (on WebLogic 10.3.5) to use SSL with custom identity/trust keystores and all that has been working fine. However, after completing the SSL configuration, we started getting some warning messages saying "Invalid/unknown SSL header was received from peer". After looking through some articles online, the solution seemed to be setting the SecureListener property in nodemanager.properties to "false" and setting the listener type of the Node Manager in the Administration Console to "Plain". This did stop the warning messages from showing up in the log file...however I was wondering if there are any security implications to using a plain vs. SSL listener for the Node Manager. FYI, this is a development environment that's closed off from public access but will eventually be moved to a production environment. |
Universal rewrite rule for versioning css/js files? Posted: 10 Apr 2022 01:06 AM PDT Is there any universal rule to rewrite versions? /css/main.123456.css rewrite to /css/main.css /css/styles.123456.css rewrite to /css/styles.css /js/application.123456.js rewrite to /js/application.js How should I modify my .htaccess file to add universal versioning for css/js files? # BEGIN WordPress <IfModule mod_rewrite.c> RewriteEngine On RewriteBase / RewriteRule ^index\.php$ - [L] RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule . /index.php [L] </IfModule> # END WordPress |
Redirecting port 8080 traffic to a different server in AWS Posted: 09 Apr 2022 11:02 PM PDT I have two servers under the same domain name: - Apache webserver listening to port 80
- Node.js server listening to port 8080
What do I need to do in order to run each server on a different ec2 instance? Can I use the AWS load balancer for this? |
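Regarding the two-instance question above: a classic ELB sends every listener's traffic to the same set of backend instances, so splitting by port across two different instances usually means either giving each instance its own address (elastic IP / DNS name) or putting a small reverse proxy in front. A hypothetical nginx front-end, where the 10.0.1.x backend IPs are placeholders, not anything from the question:

```nginx
# nginx on a front instance; backend private IPs are made-up examples
server {
    listen 80;
    location / {
        proxy_pass http://10.0.1.10:80;    # Apache instance
    }
}
server {
    listen 8080;
    location / {
        proxy_pass http://10.0.1.11:8080;  # Node.js instance
    }
}
```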
Setting up PerformancePoint Services on Sharepoint 2010: connection errors Posted: 10 Apr 2022 05:00 AM PDT I have tried to setup PerformancePoint Services on SharePoint 2010, but every time I try to use the dashboard designer, I get this error: "An error has occurred attempting to contact the specified SharePoint site" I have tried these steps but it hasn't helped. Any ideas? The event log gives the following information: WebHost failed to process a request. Sender Information: System.ServiceModel.ServiceHostingEnvironment+HostingManager/24724999 Exception: System.ServiceModel.ServiceActivationException: The service '/_vti_bin/client.svc' cannot be activated due to an exception during compilation. The exception message is: This collection already contains an address with scheme http. There can be at most one address per scheme in this collection. Parameter name: item. ---> System.ArgumentException: This collection already contains an address with scheme http. There can be at most one address per scheme in this collection. 
Parameter name: item at System.ServiceModel.UriSchemeKeyedCollection.InsertItem(Int32 index, Uri item) at System.Collections.Generic.SynchronizedCollection`1.Add(T item) at System.ServiceModel.UriSchemeKeyedCollection..ctor(Uri[] addresses) at System.ServiceModel.ServiceHost..ctor(Type serviceType, Uri[] baseAddresses) at System.ServiceModel.Activation.ServiceHostFactory.CreateServiceHost(Type serviceType, Uri[] baseAddresses) at System.ServiceModel.Activation.ServiceHostFactory.CreateServiceHost(String constructorString, Uri[] baseAddresses) at System.ServiceModel.ServiceHostingEnvironment.HostingManager.CreateService(String normalizedVirtualPath) at System.ServiceModel.ServiceHostingEnvironment.HostingManager.ActivateService(String normalizedVirtualPath) at System.ServiceModel.ServiceHostingEnvironment.HostingManager.EnsureServiceAvailable(String normalizedVirtualPath) --- End of inner exception stack trace --- at System.ServiceModel.ServiceHostingEnvironment.HostingManager.EnsureServiceAvailable(String normalizedVirtualPath) at System.ServiceModel.ServiceHostingEnvironment.EnsureServiceAvailableFast(String relativeVirtualPath) Process Name: w3wp Process ID: 2576 |