Migrating from WebLogic to Tomcat Posted: 08 Jan 2022 04:37 PM PST I have a Java EE application with several modules running 100% locally on WebLogic, but it is very slow with WebLogic, mainly at republish time in some modules. I would like to know whether I can migrate it to run on Tomcat 7; I believe it is possible because another team on another project did the same thing and managed to switch to using Tomcat locally. I have already made the settings in Tomcat's context.xml, server.xml and web.xml files to connect to the Oracle database, but I still can't get the application to start. Could someone give me some guidance on how to do this? Thank you very much. |
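A minimal sketch of what the Oracle datasource wiring usually looks like on Tomcat 7, in case that is the part keeping the application from starting; the resource name, host, SID and credentials below are placeholders, not values from the question, and the Oracle JDBC driver jar must also be copied into Tomcat's lib directory. WebLogic-specific descriptors and services (weblogic.xml, EJB/JMS resources) do not carry over and would need to be replaced separately.

<!-- META-INF/context.xml -->
<Context>
  <Resource name="jdbc/AppDS" auth="Container" type="javax.sql.DataSource"
            driverClassName="oracle.jdbc.OracleDriver"
            url="jdbc:oracle:thin:@dbhost:1521:ORCL"
            username="app_user" password="app_password"
            maxActive="20" maxIdle="5" maxWait="10000"/>
</Context>

<!-- WEB-INF/web.xml -->
<resource-ref>
  <res-ref-name>jdbc/AppDS</res-ref-name>
  <res-type>javax.sql.DataSource</res-type>
  <res-auth>Container</res-auth>
</resource-ref>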
CentOS firewall-cmd script to only allow access from IPs listed in sources Posted: 08 Jan 2022 03:59 PM PST I use this script to set up my firewall. I expected to have ssh access from only one IP but that is not the case after testing. What is missing? #!/bin/bash # # Reset to initial install of firewalld # rm -f /etc/firewalld/zones/* firewall-cmd --complete-reload firewall-cmd --runtime-to-permanent firewall-cmd --reload # # Create / Setup custom zone # firewall-cmd --new-zone calzone --permanent firewall-cmd --reload firewall-cmd --zone=calzone --add-service={ssh,dhcpv6-client} firewall-cmd --zone=calzone --add-source=10.0.0.177 firewall-cmd --change-interface enp1s0 --zone calzone --permanent firewall-cmd --runtime-to-permanent firewall-cmd --reload When I run: firewall-cmd --get-active-zones I get the following calzone interfaces: enp1s0 sources: 10.0.0.177 It was my understanding that setting the interface would direct all traffic from that interface to that zone first and since there are entries in the sources the traffic would be limited to those IPs. Thanx in advance. |
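One thing worth testing (a sketch, not a verified fix): if enp1s0 itself is bound to calzone, the zone matches all traffic on that interface regardless of source, so the source list no longer restricts anything. Keeping the interface in the default zone and letting only the source entries select calzone might look like this (the 'public' zone name is an assumption about where enp1s0 currently lives):

firewall-cmd --permanent --new-zone=calzone
firewall-cmd --permanent --zone=calzone --add-source=10.0.0.177
firewall-cmd --permanent --zone=calzone --add-service=ssh
# do NOT add enp1s0 to calzone; instead remove ssh from the zone the interface stays in
firewall-cmd --permanent --zone=public --remove-service=ssh
firewall-cmd --reload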
Servers lose network connection Posted: 08 Jan 2022 03:51 PM PST Machines lose their network connection from time to time. They're all using the wifi network with 1 exception (the attic server); the exception also happens to lose the network, but it seems to happen less often. They're still working; I have a speedtest running on a RPi4 and from the screenshot below we can see that there are no stats for a while in the top graph, and on the requests (bottom right) we see the failed attempts. The wifi routers and wifi points are from Google Nest (not sure if this helps in any way). The home setup looks like the diagram below: I don't even know how to start debugging. Any ideas on what I should do to test/debug this? |
Samba share only works debug/interactive Posted: 08 Jan 2022 01:32 PM PST I set up a very simple samba share: [files] path=/data browseable = Yes read only = No writable = Yes force user = nobody Unfortunately, when I try to access the folder 'files' samba gives this error in the logs: [2022/01/08 16:23:02.713103, 0] ../../source3/smbd/service.c:787(make_connection_snum) make_connection_snum: canonicalize_connect_path failed for service files, path /data However, if I run samba interactively and with debug on: smbd -d 9 -F -i It works just fine. Which makes this difficult to troubleshoot. Not sure what the problem is, assuming it's some kind of permission but I haven't been able to figure it out yet. |
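The canonicalize_connect_path error usually means smbd, as started by the init system, cannot resolve or reach the share path at all, which would explain why an interactive run from the same shell works. A few checks worth running, as a sketch (the SELinux lines only apply if SELinux is in use, and the unit name may be smbd or smb depending on the distro):

ls -ld /data                                            # does the path exist and is every component traversable?
ls -Zd /data                                            # SELinux label, if applicable
sudo chcon -t samba_share_t /data                       # typical label for Samba shares on SELinux systems
systemctl cat smbd | grep -iE 'Protect|Private|ReadOnly'  # unit sandboxing that the interactive run bypasses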
NGINX Proxy Manager and Plesk Obsidian Posted: 08 Jan 2022 12:09 PM PST I am looking for an easy way to set up a reverse proxy with Nginx Proxy Manager in combination with Plesk (Obsidian) - how can I do this easily? Whenever I disable nginx in Plesk to use Nginx Proxy Manager, the Apache server takes over and interferes. But I don't want to disable Apache, as I have websites hosted on it. Is there a way to easily change the ports used by Apache? I tried changing the Apache config variables but I'm not sure how. Thanks! :) Rob |
eno1: Detected Hardware Unit Hang Posted: 08 Jan 2022 04:21 PM PST Could somebody tell me what this error means? Is it a serious problem or not really? And how can I diagnose it? Could it be related to my network? It seems my router also got stuck at the same time. I'm using Ubuntu 20.04 on an i5-2400, 4 GB RAM, HDD; the CPU usually doesn't go above 50%. |
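This message comes from the Intel e1000e/igb driver family when the transmit unit stalls, and it is often a driver/offload interaction rather than failing hardware. A commonly tried (but not guaranteed) workaround is to disable segmentation offloads and watch whether the hangs stop:

sudo ethtool -k eno1                               # check current offload settings
sudo ethtool -K eno1 tso off gso off gro off       # temporarily disable offloads (sketch)
journalctl -kf | grep -i "hang\|eno1"              # keep an eye on the kernel log afterwards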
Can't open PID file /run/opendkim/opendkim.pid (yet?) after start: Operation not permitted Posted: 08 Jan 2022 10:45 AM PST i am about 2 hours on configure dkim with postfix on ubuntu 20.04. I try absolutly everything, but dkim wont work. OpenDKIM-Service won´t start: root@mail:~# service opendkim status ● opendkim.service - OpenDKIM DomainKeys Identified Mail (DKIM) Milter Loaded: loaded (/lib/systemd/system/opendkim.service; enabled; vendor preset: enabled) Active: activating (start) since Sat 2022-01-08 19:39:15 CET; 59s ago Docs: man:opendkim(8) man:opendkim.conf(5) man:opendkim-genkey(8) man:opendkim-genzone(8) man:opendkim-testadsp(8) man:opendkim-testkey http://www.opendkim.org/docs.html Process: 62335 ExecStart=/usr/sbin/opendkim -x /etc/opendkim.conf (code=exited, status=0/SUCCESS) Tasks: 7 (limit: 19660) Memory: 2.7M CGroup: /system.slice/opendkim.service ├─62336 /usr/sbin/opendkim -x /etc/opendkim.conf └─62337 /usr/sbin/opendkim -x /etc/opendkim.conf Jan 08 19:39:15 mail.mydomain.de systemd[1]: Starting OpenDKIM DomainKeys Identified Mail (DKIM) Milter... Jan 08 19:39:15 mail.mydomain.de systemd[1]: opendkim.service: Can't open PID file /run/opendkim/opendkim.pid (yet?) after start: Operation not permitted Jan 08 19:39:15 mail.mydomain.de opendkim[62337]: OpenDKIM Filter v2.11.0 starting (args: -x /etc/opendkim.conf) Jan 07 13:32:59 mail.mydomain.de systemd[1]: Starting OpenDKIM DomainKeys Identified Mail (DKIM) Milter... Jan 07 13:32:59 mail.mydomain.de systemd[1]: Started OpenDKIM DomainKeys Identified Mail (DKIM) Milter. Jan 07 13:32:59 mail.mydomain.de opendkim[275965]: OpenDKIM Filter v2.11.0 starting (args: -x /etc/opendkim.conf) Jan 08 10:35:44 mail.mydomain.de systemd[1]: Stopping OpenDKIM DomainKeys Identified Mail (DKIM) Milter... Jan 08 10:35:50 mail.mydomain.de systemd[1]: opendkim.service: Succeeded. Jan 08 10:35:50 mail.mydomain.de systemd[1]: Stopped OpenDKIM DomainKeys Identified Mail (DKIM) Milter. Here is my /etc/opendkim.conf: # This is a basic configuration that can easily be adapted to suit a standard # installation. For more advanced options, see opendkim.conf(5) and/or # /usr/share/doc/opendkim/examples/opendkim.conf.sample. # Log to syslog Syslog yes # Required to use local socket with MTAs that access the socket as a non- # privileged user (e.g. Postfix) UMask 002 # Sign for example.com with key in /etc/dkimkeys/dkim.key using # selector '2007' (e.g. 2007._domainkey.example.com) #Domain example.com #KeyFile /etc/dkimkeys/dkim.key #Selector 2007 # Commonly-used options; the commented-out versions show the defaults. Canonicalization simple Mode sv SubDomains no AutoRestart yes AutoRestartRate 10/1M Background yes DNSTimeout 5 SignatureAlgorithm rsa-sha256 # Always oversign From (sign using actual From and a null From to prevent # malicious signatures header fields (From and/or others) between the signer # and the verifier. From is oversigned by default in the Debian pacakge # because it is often the identity key used by reputation systems and thus # somewhat security sensitive. OversignHeaders From ## ResolverConfiguration filename ## default (none) ## ## Specifies a configuration file to be passed to the Unbound library that ## performs DNS queries applying the DNSSEC protocol. See the Unbound ## documentation at http://unbound.net for the expected content of this file. ## The results of using this and the TrustAnchorFile setting at the same ## time are undefined. 
## In Debian, /etc/unbound/unbound.conf is shipped as part of the Suggested ## unbound package # ResolverConfiguration /etc/unbound/unbound.conf ## TrustAnchorFile filename ## default (none) ## ## Specifies a file from which trust anchor data should be read when doing ## DNS queries and applying the DNSSEC protocol. See the Unbound documentation ## at http://unbound.net for the expected format of this file. TrustAnchorFile /usr/share/dns/root.key #OpenDKIM user # Remember to add user postfix to group opendkim UserID opendkim # Map domains in From addresses to keys used to sign messages KeyTable refile:/etc/opendkim/key.table SigningTable refile:/etc/opendkim/signing.table # Hosts to ignore when verifying signatures ExternalIgnoreList /etc/opendkim/trusted.hosts # A set of internal hosts whose mail should be signed InternalHosts /etc/opendkim/trusted.hosts Socket local:/var/spool/postfix/opendkim/opendkim.sock Here is my /etc/default/opendkim.conf # Command-line options specified here will override the contents of # /etc/opendkim.conf. See opendkim(8) for a complete list of options. #DAEMON_OPTS="" # # Uncomment to specify an alternate socket # Note that setting this will override any Socket value in opendkim.conf # default: SOCKET="local:/var/spool/postfix/opendkim/opendkim.sock" # listen on all interfaces on port 54321: #SOCKET="inet:54321" # listen on loopback on port 12345: #SOCKET="inet:12345@localhost" # listen on 192.0.2.1 on port 12345: #SOCKET="inet:12345@192.0.2.1" Here is my master.cf: # # Postfix master process configuration file. For details on the format # of the file, see the master(5) manual page (command: "man 5 master" or # on-line: http://www.postfix.org/master.5.html). # # Do not forget to execute "postfix reload" after editing this file. # # ========================================================================== # service type private unpriv chroot wakeup maxproc command + args # (yes) (yes) (no) (never) (100) # ========================================================================== smtp inet n - y - - smtpd #smtp inet n - y - 1 postscreen #smtpd pass - - y - - smtpd #dnsblog unix - - y - 0 dnsblog #tlsproxy unix - - y - 0 tlsproxy submission inet n - y - - smtpd -o smtpd_enforce_tls=yes -o smtpd_sasl_auth_enable=yes -o smtpd_client_restrictions=permit_sasl_authenticated,reject # -o syslog_name=postfix/submission # -o smtpd_tls_security_level=encrypt # -o smtpd_sasl_auth_enable=yes # -o smtpd_tls_auth_only=yes # -o smtpd_reject_unlisted_recipient=no # -o smtpd_client_restrictions=$mua_client_restrictions # -o smtpd_helo_restrictions=$mua_helo_restrictions # -o smtpd_sender_restrictions=$mua_sender_restrictions # -o smtpd_recipient_restrictions= # -o smtpd_relay_restrictions=permit_sasl_authenticated,reject # -o milter_macro_daemon_name=ORIGINATING #smtps inet n - y - - smtpd # -o syslog_name=postfix/smtps # -o smtpd_tls_wrappermode=yes # -o smtpd_sasl_auth_enable=yes # -o smtpd_reject_unlisted_recipient=no # -o smtpd_client_restrictions=$mua_client_restrictions # -o smtpd_helo_restrictions=$mua_helo_restrictions # -o smtpd_sender_restrictions=$mua_sender_restrictions # -o smtpd_recipient_restrictions= # -o smtpd_relay_restrictions=permit_sasl_authenticated,reject # -o milter_macro_daemon_name=ORIGINATING #628 inet n - y - - qmqpd pickup unix n - y 60 1 pickup cleanup unix n - y - 0 cleanup qmgr unix n - n 300 1 qmgr #qmgr unix n - n 300 1 oqmgr tlsmgr unix - - y 1000? 
1 tlsmgr rewrite unix - - y - - trivial-rewrite bounce unix - - y - 0 bounce defer unix - - y - 0 bounce trace unix - - y - 0 bounce verify unix - - y - 1 verify flush unix n - y 1000? 0 flush proxymap unix - - n - - proxymap proxywrite unix - - n - 1 proxymap smtp unix - - y - - smtp relay unix - - y - - smtp -o syslog_name=postfix/$service_name # -o smtp_helo_timeout=5 -o smtp_connect_timeout=5 showq unix n - y - - showq error unix - - y - - error retry unix - - y - - error discard unix - - y - - discard local unix - n n - - local virtual unix - n n - - virtual lmtp unix - - y - - lmtp anvil unix - - y - 1 anvil scache unix - - y - 1 scache postlog unix-dgram n - n - 1 postlogd # # ==================================================================== # Interfaces to non-Postfix software. Be sure to examine the manual # pages of the non-Postfix software to find out what options it wants. # # Many of the following services use the Postfix pipe(8) delivery # agent. See the pipe(8) man page for information about ${recipient} # and other message envelope options. # ==================================================================== # # maildrop. See the Postfix MAILDROP_README file for details. # Also specify in main.cf: maildrop_destination_recipient_limit=1 # maildrop unix - n n - - pipe flags=DRhu user=vmail argv=/usr/bin/maildrop -d ${recipient} # # ==================================================================== # # Recent Cyrus versions can use the existing "lmtp" master.cf entry. # # Specify in cyrus.conf: # lmtp cmd="lmtpd -a" listen="localhost:lmtp" proto=tcp4 # # Specify in main.cf one or more of the following: # mailbox_transport = lmtp:inet:localhost # virtual_transport = lmtp:inet:localhost # # ==================================================================== # # Cyrus 2.1.5 (Amos Gouaux) # Also specify in main.cf: cyrus_destination_recipient_limit=1 # #cyrus unix - n n - - pipe # user=cyrus argv=/cyrus/bin/deliver -e -r ${sender} -m ${extension} ${user} # # ==================================================================== # Old example of delivery via Cyrus. # #old-cyrus unix - n n - - pipe # flags=R user=cyrus argv=/cyrus/bin/deliver -e -m ${extension} ${user} # # ==================================================================== # # See the Postfix UUCP_README file for configuration details. # uucp unix - n n - - pipe flags=Fqhu user=uucp argv=uux -r -n -z -a$sender - $nexthop!rmail ($recipient) # # Other external delivery methods. # ifmail unix - n n - - pipe flags=F user=ftn argv=/usr/lib/ifmail/ifmail -r $nexthop ($recipient) bsmtp unix - n n - - pipe flags=Fq. user=bsmtp argv=/usr/lib/bsmtp/bsmtp -t$nexthop -f$sender $recipient scalemail-backend unix - n n - 2 pipe flags=R user=scalemail argv=/usr/lib/scalemail/bin/scalemail-store ${nexthop} ${user} ${extension} mailman unix - n n - - pipe flags=FR user=list argv=/usr/lib/mailman/bin/postfix-to-mailman.py ${nexthop} ${user} smtps inet n - y - - smtpd -o smtpd_tls_wrappermode=yes Here is my /etc/postfix/main.cf: # See /usr/share/postfix/main.cf.dist for a commented, more complete version # Debian specific: Specifying a file name will cause the first # line of that file to be used as the name. The Debian default # is /etc/mailname. #myorigin = /etc/mailname smtpd_banner = $myhostname ESMTP myhostname = mail.mydomain.de biff = no # appending .domain is the MUA's job. 
append_dot_mydomain = no # Uncomment the next line to generate "delayed mail" warnings #delay_warning_time = 4h readme_directory = no # See http://www.postfix.org/COMPATIBILITY_README.html -- default to 2 on # fresh installs. compatibility_level = 2 message_size_limit = 10240000 # TLS parameters smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key smtpd_tls_security_level=may smtp_tls_CApath=/etc/ssl/certs smtp_tls_security_level=may smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache smtpd_recipient_restrictions = permit_mynetworks, reject_unauth_destination, reject_non_fqdn_sender # Milter configuration milter_default_action = accept milter_protocol = 6 smtpd_milters = local:opendkim/opendkim.sock non_smtpd_milters = $smtpd_milters smtpd_sender_restrictions = permit_mynetworks, permit_sasl_authenticated mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 alias_maps = hash:/etc/aliases alias_database = hash:/etc/aliases myorigin = /etc/mailname mydestination = localhost relayhost = smtpd_relay_restrictions = permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination mailbox_command = procmail -a "$EXTENSION" mailbox_size_limit = 0 recipient_delimiter = + inet_interfaces = all inet_protocols = all #custom for kopano virtual_alias_maps = hash:/etc/postfix/virtual virtual_mailbox_maps = mysql:/etc/postfix/mysql-users.cf virtual_transport = lmtp:[localhost]:2003 virtual_mailbox_domains = mydomain1.de, mydomain2.de # smtpd_tls_ciphers = medium smtpd_tls_mandatory_ciphers = medium tls_medium_cipherlist = ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384 smtpd_tls_mandatory_protocols = TLSv1.2 smtpd_tls_protocols = TLSv1.2 smtpd_tls_security_level = may smtp_tls_security_level = may smtp_use_tls = no smtpd_client_restrictions = permit_mynetworks, permit_sasl_authenticated, reject_rbl_client zen.spamhaus.org smtp_send_xforward_command = yes smtpd_authorized_xforward_hosts = 127.0.0.0/8 [::1]/128 smtpd_sasl_auth_enable = no virtual_mailbox_base = /var/qmail/mailnames virtual_uid_maps = static:30 virtual_gid_maps = static:31 |
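Judging by the status output, systemd expects a PID file under /run/opendkim/ that opendkim never writes, so the unit keeps cycling. One sketch of a fix, assuming the packaged unit file is the one shown and that PidFile is not currently set in /etc/opendkim.conf:

sudo mkdir -p /etc/systemd/system/opendkim.service.d
printf '[Service]\nRuntimeDirectory=opendkim\nPIDFile=/run/opendkim/opendkim.pid\n' | sudo tee /etc/systemd/system/opendkim.service.d/override.conf
# and add the matching line to /etc/opendkim.conf:
#   PidFile /run/opendkim/opendkim.pid
sudo systemctl daemon-reload && sudo systemctl restart opendkim

On Debian/Ubuntu there is usually also a packaged helper, /lib/opendkim/opendkim.service.generate, that regenerates these overrides from /etc/default/opendkim.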
AWS Free Tier Termination Projected Costs Posted: 08 Jan 2022 12:37 PM PST Is there a way to see what the past costs for an account would have been without any free-tier discounting, or what the projected monthly costs for that account will be after the free tier has terminated? I see no obvious way to do this in the billing dashboard or the AWS forums, and got nowhere with any of my searches. Since this basically affects everyone at one time or another, with a bias towards new users (the majority of people affected by the Free Tier), I'd expect there to be a tool for this. But, then, AWS billing is not the most intuitive for those very same new users. Thank you. |
Why are CentOS mirrors HTTP and not HTTPS? Posted: 08 Jan 2022 11:45 AM PST Quite a general question ... but this is something I couldn't get my head around, so I thought I'd post to get more eyes on the matter. As far as I know, HTTP is prone to man-in-the-middle attacks, and yet the repositories in Alpine and the CentOS mirrors are not HTTPS. In the old days HTTPS used to be an expensive matter: it cost server CPU time and certificates weren't free. But it's 2022 now, we have plenty of ways to overcome those problems, and security is a higher priority than ever. Any thoughts on this, or any suggestions for obtaining binaries in a smarter way? Also, is this a problem in the wider Linux community, i.e. Ubuntu/Mint/OpenSuse etc.? |
Best way to route traffic based on logged in user via specific redundant route? Posted: 08 Jan 2022 01:46 PM PST I have an Ubuntu 20.04 machine with 2 ethernet interfaces with 2 IP addresses each. It's an AWS EC2 instance and each of the 4 IP addresses has an EIP attached to it via NAT. Both interfaces connect to the same internal subnet. The setup looks like this: EC2 Machine: - eni1: - private-IP1 -> public-IP1
- private-IP2 -> public-IP2
- eni2: - private-IP3 -> public-IP3
- private-IP4 -> public-IP4
All 4 addresses are reachable from the outside, so that seems to be fine. However, outgoing traffic currently always uses private-IP1 (and thus public-IP1). I want to specify that individual SSH users use specific IP addresses, so they'll come from the corresponding public IP when talking to services on the internet, i.e. user1 -> private-IP1 user2 -> private-IP2 user3 -> private-IP3 user4 -> private-IP4 What's the best way of achieving this result? |
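A sketch of the usual building blocks for per-user egress IPs, combining an iptables owner match with SNAT and, for addresses on the second ENI, a per-mark routing table; the user names, marks, addresses and gateway below are placeholders to adapt:

# mark traffic by owning user
iptables -t mangle -A OUTPUT -m owner --uid-owner user2 -j MARK --set-mark 2
# rewrite the source to the wanted private IP (which maps to public-IP2 via its EIP)
iptables -t nat -A POSTROUTING -m mark --mark 2 -j SNAT --to-source <private-IP2>
# for users mapped to the second ENI, also send marked traffic out eth1
echo '200 eni2' >> /etc/iproute2/rt_tables
ip route add default via <subnet-gateway> dev eth1 table eni2
ip rule add fwmark 3 lookup eni2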
Why am I hitting file number limit prematurely? Posted: 08 Jan 2022 02:54 PM PST A custom application running on one of our development systems (Ubuntu 21.10 Desktop) is bombing out with the error socket: Too many open files . So I immediately checked the limits. Here's the output: $ ulimit -n 65535 Now the reason this is strange is because the application is only using 1019 sockets when it bombs out. Given that there may be a few other file descriptors open I figured it is hitting a 1024 limit. Why is the desktop imposing a 1024 limit when ulimit -n clearly says the limit is 65535? Just to make this even more strange. I have two applications. An epoll based web scraper and a PACKET_MMAP based application that sends a SYN to multiple web servers to start a connection. The epoll application does not bomb out and yet uses much more than 1024 sockets. While it is the PACKET_MMAP based application using raw sockets that bombs out. Also, the same applications tested on a server (also running Ubuntu) don't bomb out. So whatever the problem is it is specific to the Desktop and to the raw sockets application. EDIT: Output of cat /proc/pid/limits as requested: Limit Soft Limit Hard Limit Units Max cpu time unlimited unlimited seconds Max file size unlimited unlimited bytes Max data size unlimited unlimited bytes Max stack size 8388608 unlimited bytes Max core file size 0 unlimited bytes Max resident set unlimited unlimited bytes Max processes 31106 31106 processes Max open files 1024 1048576 files Max locked memory 1025802240 1025802240 bytes Max address space unlimited unlimited bytes Max file locks unlimited unlimited locks Max pending signals 31106 31106 signals Max msgqueue size 819200 819200 bytes Max nice priority 0 0 Max realtime priority 0 0 Max realtime timeout unlimited unlimited us The output shows that soft limit of max open files is in fact 1024 contrary to the output of ulimit -n . What gives? |
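The /proc output shows the process itself was started with a 1024 soft limit, most likely inherited from the desktop session (systemd user instance) rather than from the shell where ulimit -n was checked. A sketch of ways to confirm and raise it; the PID is a placeholder:

prlimit --pid 12345 --nofile                       # what the running process actually got
sudo prlimit --pid 12345 --nofile=65535:1048576    # raise it in place (soft:hard)
# for GUI-launched programs, the systemd defaults can be raised instead
# (assumption: the session is systemd-managed), e.g. in /etc/systemd/user.conf:
#   DefaultLimitNOFILE=65535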
Outbound proxy using multiple public IP addresses on EC2 Posted: 08 Jan 2022 01:50 PM PST We have a bunch of backend servers in the form of EC2 instances based in a private subnet in AWS VPC that need to talk to a 3rd party API. This API is limiting the requests we can send based on the originating ip address and while scaling our setup we have started hitting the limits on the IP of the NAT gateway that is used for all outbound traffic. Thus I want to setup a proxy for outbound traffic with several EIPs attached. For testing I am currently using an Amazon Linux 2 instance with 2 ENIs with 2 EIPs attached each. The backend servers open an SSH tunnel to the outbound proxy and map the 3rd party API to a local port, an entry in the servers hosts file redirects all traffic to that hostname to localhost and this setup is working fine in general but outbound traffic from the proxy is always using only the first associated EIP. So my setup looks like this: ENI1: eth0 private IP1: 10.0.11.81 private IP2: 10.0.11.82 ENI2: eth1 private IP3: 10.0.11.52 private IP4: 10.0.11.53 original route table: default via 10.0.11.1 dev eth0 default via 10.0.11.1 dev eth1 metric 10001 10.0.11.0/24 dev eth0 proto kernel scope link src 10.0.11.81 10.0.11.0/24 dev eth1 proto kernel scope link src 10.0.11.52 169.254.169.254 dev eth0 I now want to be able to specify which backend server uses which EIP when calling the API via the outbound proxy. My first try was the following: - setup 4 different users on the proxy host
- add iptable rules for each user like so:
iptables -t nat -m owner --uid-owner user1 -A POSTROUTING -j SNAT --to IP1 etc. - this works for the 2 IPs that are attached to the primary ENI (ie eth0 in the machine) but does not work for the 2 IPs associated with the second ENI (eth1)
- adding
-o eth1 to the statement did not work either My next try was to create custom routing tables for each IP address and matching them to policy rules: - create custom route table i.e. for IP3:
default via 10.0.11.1 dev eth1 10.0.11.0/24 dev eth1 proto static scope link src 10.0.11.52 169.254.169.254 dev eth1 scope link - create iptables rule to mark traffic originating from user3:
-A OUTPUT -m owner --uid-owner 1003 -j MARK --set-xmark 0x3/0xffffffff - create rule to utilize custom route table for all packets marked 3:
32763: from all fwmark 0x3 lookup ip3 - this again does not work. packets do get treated differently. all users can communicate with the world except for user3 in the above example.
What am I doing wrong? Is there something trivial I am missing or is my entire approach doomed to fail? I'm very open to suggestions, both on getting this setup working as well as alternative approaches... Thanks a lot in advance! |
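One possible missing piece, offered as a sketch to test rather than a confirmed diagnosis: marking and re-routing user3's packets out eth1 does not change their source address, and AWS drops traffic leaving an ENI with a source IP that does not belong to it, so an SNAT for the marked traffic plus relaxed reverse-path filtering may be needed:

iptables -t nat -A POSTROUTING -m mark --mark 0x3 -o eth1 -j SNAT --to-source 10.0.11.52
sysctl -w net.ipv4.conf.eth1.rp_filter=2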
Unable to install K3s on Proxmox VM Posted: 08 Jan 2022 02:32 PM PST I'm trying to create an HA k3s cluster using Proxmox and a small fleet of Raspberry Pi 4Bs. For the Pis everything works fine, but when trying to install a master on a Proxmox VM it will not start. My Setup: - Host: Proxmox 7.0.7 (I tried with 6.4.4 as well)
- Guest: Ubuntu 20.04.2
- K3S: v1.21.3+k3s1 (I tried with v1.19.13+k3s1 as well)
- MariaDB: 10.3
I'm running these commands in order to install the master export K3S_DATASTORE_ENDPOINT='mysql://DB_USER:DB_PASSWORD@tcp(DB_IP:DB_PORT)/DB_SCHEME' curl -sfL https://get.k3s.io | sh -s - server --node-taint CriticalAddonsOnly=true:NoExecute --tls-san NGINX_LOADBALANCER_IP This is the output of installation and startup: [INFO] Using v1.19.13+k3s1 as release [INFO] Downloading hash https://github.com/k3s-io/k3s/releases/download/v1.19.1 3+k3s1/sha256sum-amd64.txt [INFO] Downloading binary https://github.com/k3s-io/k3s/releases/download/v1.19 .13+k3s1/k3s [INFO] Verifying binary download [INFO] Installing k3s to /usr/local/bin/k3s [INFO] Creating /usr/local/bin/kubectl symlink to k3s [INFO] Creating /usr/local/bin/crictl symlink to k3s [INFO] Creating /usr/local/bin/ctr symlink to k3s [INFO] Creating killall script /usr/local/bin/k3s-killall.sh [INFO] Creating uninstall script /usr/local/bin/k3s-uninstall.sh [INFO] env: Creating environment file /etc/systemd/system/k3s.service.env [INFO] systemd: Creating service file /etc/systemd/system/k3s.service [INFO] systemd: Enabling k3s unit Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service. [INFO] systemd: Starting k3s Job for k3s.service failed because the control process exited with error code. See "systemctl status k3s.service" and "journalctl -xe" for details. I checked the logs: systemctl status k3s.service ● k3s.service - Lightweight Kubernetes Loaded: loaded (/etc/systemd/system/k3s.service; enabled; vendor preset: enabled) Active: activating (auto-restart) (Result: exit-code) since Tue 2021-08-03 20:27:40 UTC; 2s ago Docs: https://k3s.io Process: 6181 ExecStartPre=/bin/sh -xc ! /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service (code=exited, status=0/SUCCESS) Process: 6193 ExecStartPre=/sbin/modprobe br_netfilter (code=exited, status=0/SUCCESS) Process: 6194 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS) Process: 6195 ExecStart=/usr/local/bin/k3s server --node-taint CriticalAddonsOnly=true:NoExecute --tls-san NGINX_LOADBALANNCER_IP (code=exited, status=1/FAILURE) Main PID: 6195 (code=exited, status=1/FAILURE) Aug 03 20:27:40 k3svm1 systemd[1]: k3s.service: Main process exited, code=exited, status=1/FAILURE Aug 03 20:27:40 k3svm1 systemd[1]: k3s.service: Failed with result 'exit-code'. Aug 03 20:27:40 k3svm1 systemd[1]: Failed to start Lightweight Kubernetes. and: journalctl -u k3s.service Aug 03 20:06:17 k3svm1 systemd[1]: Starting Lightweight Kubernetes... Aug 03 20:06:17 k3svm1 sh[19450]: + /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service Aug 03 20:06:17 k3svm1 sh[19451]: Failed to get unit file state for nm-cloud-setup.service: No such file or directory Aug 03 20:06:17 k3svm1 k3s[19460]: time="2021-08-03T20:06:17Z" level=info msg="Acquiring lock file /var/lib/rancher/k3s/data/.lock" Aug 03 20:06:17 k3svm1 k3s[19460]: time="2021-08-03T20:06:17Z" level=info msg="Preparing data dir /var/lib/rancher/k3s/data/9df574741d2573cbbe6616e8624488b36b3340d077bc50da7cb167f1b08a64d1" Aug 03 20:06:18 k3svm1 k3s[19460]: time="2021-08-03T20:06:18.748397535Z" level=info msg="Starting k3s v1.21.3+k3s1 (1d1f220f)" Aug 03 20:06:18 k3svm1 k3s[19460]: time="2021-08-03T20:06:18.751745749Z" level=info msg="Configuring mysql database connection pooling: maxIdleConns=2, maxOpenConns=0, connMaxLifetime=0s" Aug 03 20:06:18 k3svm1 k3s[19460]: time="2021-08-03T20:06:18.751876220Z" level=info msg="Configuring database table schema and indexes, this may take a moment..." 
Aug 03 20:06:18 k3svm1 k3s[19460]: time="2021-08-03T20:06:18.753348552Z" level=info msg="Database tables and indexes are up to date" Aug 03 20:06:18 k3svm1 k3s[19460]: time="2021-08-03T20:06:18.757714719Z" level=info msg="Kine listening on unix://kine.sock" Aug 03 20:06:18 k3svm1 k3s[19460]: time="2021-08-03T20:06:18.764631916Z" level=info msg="certificate CN=system:admin,O=system:masters signed by CN=k3s-client-ca@1628021178: notBefore=2021-08-03 20:06:18 +0000 UTC notAfter=2022-08-03 20:> Aug 03 20:06:18 k3svm1 k3s[19460]: time="2021-08-03T20:06:18.765377675Z" level=info msg="certificate CN=system:kube-controller-manager signed by CN=k3s-client-ca@1628021178: notBefore=2021-08-03 20:06:18 +0000 UTC notAfter=2022-08-03 20> Aug 03 20:06:18 k3svm1 k3s[19460]: time="2021-08-03T20:06:18.766187231Z" level=info msg="certificate CN=system:kube-scheduler signed by CN=k3s-client-ca@1628021178: notBefore=2021-08-03 20:06:18 +0000 UTC notAfter=2022-08-03 20:06:18 +0> Aug 03 20:06:18 k3svm1 k3s[19460]: time="2021-08-03T20:06:18.766815165Z" level=info msg="certificate CN=kube-apiserver signed by CN=k3s-client-ca@1628021178: notBefore=2021-08-03 20:06:18 +0000 UTC notAfter=2022-08-03 20:06:18 +0000 UTC" Aug 03 20:06:18 k3svm1 k3s[19460]: time="2021-08-03T20:06:18.767415198Z" level=info msg="certificate CN=system:kube-proxy signed by CN=k3s-client-ca@1628021178: notBefore=2021-08-03 20:06:18 +0000 UTC notAfter=2022-08-03 20:06:18 +0000 > Aug 03 20:06:18 k3svm1 k3s[19460]: time="2021-08-03T20:06:18.767950031Z" level=info msg="certificate CN=system:k3s-controller signed by CN=k3s-client-ca@1628021178: notBefore=2021-08-03 20:06:18 +0000 UTC notAfter=2022-08-03 20:06:18 +0> Aug 03 20:06:18 k3svm1 k3s[19460]: time="2021-08-03T20:06:18.768698847Z" level=info msg="certificate CN=k3s-cloud-controller-manager signed by CN=k3s-client-ca@1628021178: notBefore=2021-08-03 20:06:18 +0000 UTC notAfter=2022-08-03 20:0> Aug 03 20:06:18 k3svm1 k3s[19460]: time="2021-08-03T20:06:18.769745716Z" level=info msg="certificate CN=kube-apiserver signed by CN=k3s-server-ca@1628021178: notBefore=2021-08-03 20:06:18 +0000 UTC notAfter=2022-08-03 20:06:18 +0000 UTC" Aug 03 20:06:18 k3svm1 k3s[19460]: time="2021-08-03T20:06:18.770870630Z" level=info msg="certificate CN=system:auth-proxy signed by CN=k3s-request-header-ca@1628021178: notBefore=2021-08-03 20:06:18 +0000 UTC notAfter=2022-08-03 20:06:1> Aug 03 20:06:18 k3svm1 k3s[19460]: time="2021-08-03T20:06:18.771882180Z" level=info msg="certificate CN=etcd-server signed by CN=etcd-server-ca@1628021178: notBefore=2021-08-03 20:06:18 +0000 UTC notAfter=2022-08-03 20:06:18 +0000 UTC" Aug 03 20:06:18 k3svm1 k3s[19460]: time="2021-08-03T20:06:18.772508382Z" level=info msg="certificate CN=etcd-client signed by CN=etcd-server-ca@1628021178: notBefore=2021-08-03 20:06:18 +0000 UTC notAfter=2022-08-03 20:06:18 +0000 UTC" Aug 03 20:06:18 k3svm1 k3s[19460]: time="2021-08-03T20:06:18.773399505Z" level=info msg="certificate CN=etcd-peer signed by CN=etcd-peer-ca@1628021178: notBefore=2021-08-03 20:06:18 +0000 UTC notAfter=2022-08-03 20:06:18 +0000 UTC" Aug 03 20:06:18 k3svm1 k3s[19460]: time="2021-08-03T20:06:18.813171353Z" level=info msg="certificate CN=k3s,O=k3s signed by CN=k3s-server-ca@1628021178: notBefore=2021-08-03 20:06:18 +0000 UTC notAfter=2022-08-03 20:06:18 +0000 UTC" Aug 03 20:06:18 k3svm1 k3s[19460]: time="2021-08-03T20:06:18.813556476Z" level=info msg="Active TLS secret (ver=) (count 9): map[listener.cattle.io/cn-10.43.0.1:10.43.0.1 listener.cattle.io/cn-127.0.0.1:127.0.0.1 
listener.cattle.io/cn-> Aug 03 20:06:18 k3svm1 k3s[19460]: time="2021-08-03T20:06:18.819862032Z" level=fatal msg="starting kubernetes: preparing server: bootstrap data already found and encrypted with different token" Aug 03 20:06:18 k3svm1 systemd[1]: k3s.service: Main process exited, code=exited, status=1/FAILURE Aug 03 20:06:18 k3svm1 systemd[1]: k3s.service: Failed with result 'exit-code'. Aug 03 20:06:18 k3svm1 systemd[1]: Failed to start Lightweight Kubernetes. Aug 03 20:06:23 k3svm1 systemd[1]: k3s.service: Scheduled restart job, restart counter is at 1. Aug 03 20:06:23 k3svm1 systemd[1]: Stopped Lightweight Kubernetes. Aug 03 20:06:23 k3svm1 systemd[1]: Starting Lightweight Kubernetes... Aug 03 20:06:23 k3svm1 sh[19478]: + /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service Aug 03 20:06:23 k3svm1 sh[19483]: Failed to get unit file state for nm-cloud-setup.service: No such file or directory Aug 03 20:06:24 k3svm1 k3s[19489]: time="2021-08-03T20:06:24.115279840Z" level=info msg="Starting k3s v1.21.3+k3s1 (1d1f220f)" Aug 03 20:06:24 k3svm1 k3s[19489]: time="2021-08-03T20:06:24.119390931Z" level=info msg="Configuring mysql database connection pooling: maxIdleConns=2, maxOpenConns=0, connMaxLifetime=0s" Aug 03 20:06:24 k3svm1 k3s[19489]: time="2021-08-03T20:06:24.119554649Z" level=info msg="Configuring database table schema and indexes, this may take a moment..." Aug 03 20:06:24 k3svm1 k3s[19489]: time="2021-08-03T20:06:24.121305745Z" level=info msg="Database tables and indexes are up to date" Aug 03 20:06:24 k3svm1 k3s[19489]: time="2021-08-03T20:06:24.125898745Z" level=info msg="Kine listening on unix://kine.sock" Aug 03 20:06:24 k3svm1 k3s[19489]: time="2021-08-03T20:06:24.146164308Z" level=fatal msg="starting kubernetes: preparing server: bootstrap data already found and encrypted with different token" Aug 03 20:06:24 k3svm1 systemd[1]: k3s.service: Main process exited, code=exited, status=1/FAILURE Aug 03 20:06:24 k3svm1 systemd[1]: k3s.service: Failed with result 'exit-code'. Aug 03 20:06:24 k3svm1 systemd[1]: Failed to start Lightweight Kubernetes. Aug 03 20:06:29 k3svm1 systemd[1]: k3s.service: Scheduled restart job, restart counter is at 2. Aug 03 20:06:29 k3svm1 systemd[1]: Stopped Lightweight Kubernetes. Aug 03 20:06:29 k3svm1 systemd[1]: Starting Lightweight Kubernetes... Aug 03 20:06:29 k3svm1 sh[19507]: + /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service Aug 03 20:06:29 k3svm1 sh[19508]: Failed to get unit file state for nm-cloud-setup.service: No such file or directory Aug 03 20:06:29 k3svm1 k3s[19511]: time="2021-08-03T20:06:29.565328025Z" level=info msg="Starting k3s v1.21.3+k3s1 (1d1f220f)" Aug 03 20:06:29 k3svm1 k3s[19511]: time="2021-08-03T20:06:29.568959518Z" level=info msg="Configuring mysql database connection pooling: maxIdleConns=2, maxOpenConns=0, connMaxLifetime=0s" Aug 03 20:06:29 k3svm1 k3s[19511]: time="2021-08-03T20:06:29.568994906Z" level=info msg="Configuring database table schema and indexes, this may take a moment..." 
Aug 03 20:06:29 k3svm1 k3s[19511]: time="2021-08-03T20:06:29.570693830Z" level=info msg="Database tables and indexes are up to date" Aug 03 20:06:29 k3svm1 k3s[19511]: time="2021-08-03T20:06:29.575194321Z" level=info msg="Kine listening on unix://kine.sock" Aug 03 20:06:29 k3svm1 k3s[19511]: time="2021-08-03T20:06:29.594809727Z" level=fatal msg="starting kubernetes: preparing server: bootstrap data already found and encrypted with different token" Aug 03 20:06:29 k3svm1 systemd[1]: k3s.service: Main process exited, code=exited, status=1/FAILURE Aug 03 20:06:29 k3svm1 systemd[1]: k3s.service: Failed with result 'exit-code'. Aug 03 20:06:29 k3svm1 systemd[1]: Failed to start Lightweight Kubernetes. Aug 03 20:06:34 k3svm1 systemd[1]: k3s.service: Scheduled restart job, restart counter is at 3. Aug 03 20:06:34 k3svm1 systemd[1]: Stopped Lightweight Kubernetes. Aug 03 20:06:34 k3svm1 systemd[1]: Starting Lightweight Kubernetes... Aug 03 20:06:34 k3svm1 sh[19527]: + /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service Aug 03 20:06:34 k3svm1 sh[19530]: Failed to get unit file state for nm-cloud-setup.service: No such file or directory From there the restarts continue. As mentioned I've tried with multiple versions, but nothing seems to work. Also I am not really getting an error. The only hint I've found in different GitHub issues was to enable containerization regarding Raspberry PI by editing /boot/cmdline.txt. However I am not getting the issue on a PI but rather on a Proxmox-VM. Is there something I am missing? Somehow this guy managed to get it to run in the same setup. Did someone else make it run and might provide some reference? |
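The fatal line "bootstrap data already found and encrypted with different token" suggests the shared MySQL datastore already holds bootstrap data from an earlier server install, written with a different cluster token. Two ways people typically get past this, both sketches to verify against this setup (the second one wipes existing cluster state):

# 1) reinstall/start the server with the token the first install used
export K3S_TOKEN='token-from-the-original-server'
curl -sfL https://get.k3s.io | sh -s - server --node-taint CriticalAddonsOnly=true:NoExecute --tls-san NGINX_LOADBALANCER_IP
# 2) or start from a clean datastore (destroys the existing cluster!)
mysql -u DB_USER -p -e 'DROP DATABASE DB_SCHEME; CREATE DATABASE DB_SCHEME;'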
ssh-add returns "Error connecting to agent: No such file or directory" even though agent is running Posted: 08 Jan 2022 12:06 PM PST Windows 10 20H2, build 19042.685 I'm trying to use the SSH agent in the built-in OpenSSH client on Windows 10. The agent is running: C:\Users\Daniel> Get-Service | ?{$_.Name -like '*ssh-agent*'} Status Name DisplayName ------ ---- ----------- Running ssh-agent OpenSSH Authentication Agent However, ssh-add is still throwing the same error: C:\Users\Daniel> ssh-add C:\Users\Daniel\.ssh\id_ed25519 Error connecting to agent: No such file or directory Any ideas? |
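One frequent cause of this exact message is that the ssh-add being run is not the Windows build (for example Git for Windows' OpenSSH, which looks for a Unix socket instead of the Windows service). A quick way to check, as a sketch:

Get-Command ssh-add | Select-Object Source        # should point at C:\Windows\System32\OpenSSH\
Get-Service ssh-agent | Set-Service -StartupType Automatic
Start-Service ssh-agent
& 'C:\Windows\System32\OpenSSH\ssh-add.exe' "$env:USERPROFILE\.ssh\id_ed25519"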
Windows Event Forwarding and Sysmon Posted: 08 Jan 2022 03:08 PM PST I'm dealing with a bit of an issue relating to WEF and sysmon I have the collector server setup and 2 domain controllers are configured via GPO to send events to WEF collector. It is configured via Source initiated but it seems there might be something missing in the configuration. I used https://www.syspanda.com/index.php/2017/03/01/setting-up-windows-event-forwarder-server-wef-domain-gpo-deployment-part-33/ config guide but excluded step 4. Step 1: Create WinRM Service and set it to start automatically Launch your group policy utility and perform the following: Right click your computer OU and Create GPO in this domain, and link it here Provide a name (WEF Deployment) , click OK Right click your newly created GPO WEF Deployment and select Edit Navigate to Computer Configuration > Preferences > Control Panel Settings > "New > Service" Startup: AutomaticService Name: WinRMService Action: Start service Click Apply Step 2: Provide Event Log Reader Access In this step we will add the Network Service & Event Forwarder Server (WindowsLogCollector) to the Event Log Readers and Groups. This will give our WEF server (WindowsLogCollector) access to your domain endpoint event logs. Right click your WEF Deployment GPO and select Edit Computer Configuration > Preferences > Control Panel Settings > right click > "New Group" Action: Update Group Name: Event Log Readers Members: NETWORK SERVICE Domain\WindowsLogCollector$ Apply > OK Step 3: Adding WEF Server Subscription address This will allow our endpoints to enroll to our WindowsLogCollector subscriptions. Right click your WEF Deployment GPO and select Edit Computer Configuration > Policies > Administrative Templates > Windows Components > Event Forwarding > Configure target Subscription Manager > Set to EnableShow: Server=http://WindowsLogCollector.domain.COM:5985/wsman/SubscriptionManager/WEC Step 4: Allow Remote server Management through WinRM Right click your WEF Deployment GPO and select Edit Computer Configuration > Policies > Administrative Templates > Windows Components > Windows Remote Management (WinRM) > WinRMService > Allow Remote Server Management through WinRM Set: EnableiPv4 Filter: * (or you may enter just the IP address of your WindowsLogCollector) IpV6 Filter: * (you may uncheck this) Is there something missing after applying log reader access? As it still says source computers is 0 Perhaps a permissions issue. |
Error 404 when trying to access Kubernetes dashboard from remote laptop using SSH proxy Posted: 08 Jan 2022 12:06 PM PST I have a remote cluster on a remote private cloud to which I have only SSH access (no GUI). I started the proxy server with: kubectl proxy --address=0.0.0.0 --accept-hosts=.* And started a local SSH proxy to the remote K8s master with: ssh -L 8001:127.0.0.1:8001 -N -f $MASTER_IP The dashboard is accessible from the following address on my local laptop: http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login For the token I created a ClusterRoleBinding and retrieved the token using (you can find detailed instructions in a reply on this link): kubectl describe secret $ROLE-TOKEN But once I click sign in, I get: 404 Not found. The server could not find the requested resource. What is the reason for this and how can I get around it? Technical details: OS: Debian 10 Kubernetes installed with Kubespray Kubespray version: 2.12.0 Kubernetes version: 1.16.3 Dashboard version: 1.10.1 |
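A sketch of checks that narrow this down: the proxied URL only works if the namespace, service name and port in the path match what is actually deployed, so listing the dashboard service in both candidate namespaces is a reasonable first step (the service names here are the usual defaults, not confirmed values):

kubectl -n kube-system get svc kubernetes-dashboard
kubectl -n kubernetes-dashboard get svc          # newer dashboard releases install here instead
# the proxy path then has to be
#   /api/v1/namespaces/<namespace>/services/https:<service-name>:/proxy/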
WinRM remoting server with local Administrator account not working? Posted: 08 Jan 2022 11:07 AM PST I have a Windows 2012 R2 server in the domain. This server has no domain user access, only a local admin user. I can RDC into this machine using the admin account, but cannot open a PSSession, so Enter-PSSession or Invoke-Command or New-PSSession does not work. I have set the TrustedHosts value to '*' already. Still does not work. The example: $cred = Get-Credential # username: Aministrator, password: secret123 Enter-PSSession -computername SVR1 -Credential $cred Immediately I get the error: Enter-PSSession : Connecting to remote server SVR1 failed with the following error message : The user name or password is incorrect. For more information, see the about_Remote_Troubleshooting Help topic. Why can't I log in to the server with the local admin account? EDIT: After the comment below, I tried SVR1\ADMINISTRATOR as the username, and then I got a different error message: Enter-PSSession : Connecting to remote server SVR1 failed with the following error message : WinRM cannot process the request. The following error with errorcode 0x80090311 occurred while using Kerberos authentication: We can't sign you in with this credential because your domain isn't available. Make sure your device is connected to your organization's network and try again. If you previously signed in on this device with another credential, you can sign in with that credential. Possible causes are: -The user name or password specified are invalid. -Kerberos is used when no authentication method and no user name are specified. -Kerberos accepts domain user names, but not local user names. -The Service Principal Name (SPN) for the remote computer name and port does not exist. -The client and remote computers are in different domains and there is no trust between the two domains. After checking for the above issues, try the following: -Check the Event Viewer for events related to authentication. -Change the authentication method; add the destination computer to the WinRM TrustedHosts configuration setting or use HTTPS transport. Note that computers in the TrustedHosts list might not be authenticated. -For more information about WinRM configuration, run the following command: winrm help config. For more information, see the about_Remote_Troubleshooting Help topic. |
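The second error shows the client is attempting Kerberos, which cannot work for a local (non-domain) account. A sketch of something commonly tried in that situation, offered as a test rather than a confirmed fix: force NTLM negotiation, for example by targeting the machine by IP address (the IP below is a placeholder, and it must then appear in TrustedHosts):

$cred = Get-Credential SVR1\Administrator
Enter-PSSession -ComputerName 192.0.2.10 -Credential $cred -Authentication Negotiate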
pfSense - NAT not working Posted: 08 Jan 2022 02:02 PM PST I have a pfSense VM on Proxmox. I have two IP addresses configured: WAN: xx.xx.88.24 -> public IP accessible from the internet LAN: 192.168.1.100 -> corporate intranet I want to access an internal server from the WAN. For example, I have a server with the IP 192.168.1.110 with an HTTPD server running on the default port 80. So I have created a NAT rule in pfSense: Interface Protocol Dest. Address Dest. Ports NAT IP NAT PORTS WAN TCP/UDP xx.xx.88.24 80(HTTP) 192.168.1.110 80(HTTP) With the firewall rule created in the NAT configuration. When trying to test this configuration, I receive a timeout from the browser. In Diagnostics -> States I only see these entries: Interface Protocol Source -> Destination State WAN tcp (my PC IP) -> 192.168.1.110:80(xx.xx.88.24:80) CLOSED:SYN_SENT LAN tcp (my PC IP) -> 192.168.1.110:80 CLOSED:SYN_SENT Any idea? The HTTP service is up and running on the internal server, and a rule was added to the server's firewall to accept HTTP requests. TIA |
"ip rule to" works but "ip rule fwmark" fails - why? Posted: 08 Jan 2022 02:09 PM PST I have a CentOS 6 (kernel 2.6.32) router with working OpenVPN client on it, and I want to redirect some traffic via VPN server. Client (192.168.60.159 ) sends request to router (192.168.60.6:1443 ), and router redirects it via VPN connection (10.200.0.54 ) to server (185.61.149.21:443 ). I've created a specific routing table tunde and a rule to use this table on packets MARKed by iptables. This rule has default GW on VPN while main routing table has real GW. When I try to use ip rule add fwmark 0x64 lookup tunde , it fails. Request passes to server OK, server replies, router receives reply but doesn't pass it to client. But, if I add ip rule add to 185.61.149.21 lookup tunde - all works perfect. (But this rule doesn't satisfy my needs, i need per-port routing) Looks like iptables somehow cannot de-masquerade replies. Any ideas? Thank you! #ip rule ls 32765: from all fwmark 0x64 lookup tunde This routing table has default gateway on VPN: #ip route ls table tunde 10.200.0.53 dev tunde proto kernel scope link src 10.200.0.54 192.168.60.0/24 dev eth0 proto kernel scope link src 192.168.60.6 default via 10.200.0.53 dev tunde src 10.200.0.54 While main routing table has real default gateway: # ip route ls 10.200.0.53 dev tunde proto kernel scope link src 10.200.0.54 192.168.60.0/24 dev eth0 proto kernel scope link src 192.168.60.6 default via 192.168.60.1 dev eth0 Tcpdump shows both requests and replies on TUN interface: # tcpdump -i tunde -nn tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on tunde, link-type RAW (Raw IP), capture size 65535 bytes 15:50:37.136708 IP 10.200.0.54.51409 > 185.61.149.21.443: Flags [S], seq 302961260, win 65280, options [mss 1360,nop,wscale 8,nop,nop,sackOK], length 0 15:50:37.209278 IP 185.61.149.21.443 > 10.200.0.54.51409: Flags [S.], seq 390829204, ack 302961261, win 29200, options [mss 1357,nop,nop,sackOK,nop,wscale 9], length 0 15:50:38.219458 IP 185.61.149.21.443 > 10.200.0.54.51409: Flags [S.], seq 390829204, ack 302961261, win 29200, options [mss 1357,nop,nop,sackOK,nop,wscale 9], length 0 15:50:40.136933 IP 10.200.0.54.51409 > 185.61.149.21.443: Flags [S], seq 302961260, win 65280, options [mss 1360,nop,wscale 8,nop,nop,sackOK], length 0 15:50:40.182989 IP 185.61.149.21.443 > 10.200.0.54.51409: Flags [S.], seq 390829204, ack 302961261, win 29200, options [mss 1357,nop,nop,sackOK,nop,wscale 9], length 0 15:50:42.191772 IP 185.61.149.21.443 > 10.200.0.54.51409: Flags [S.], seq 390829204, ack 302961261, win 29200, options [mss 1357,nop,nop,sackOK,nop,wscale 9], length 0 15:50:42.892051 IP 185.61.149.21.443 > 10.200.0.54.51391: Flags [S.], seq 528990609, ack 3345061728, win 29200, options [mss 1357,nop,nop,sackOK,nop,wscale 9], length 0 In iptables' log I also see replies. 
All packets are MARKed and use correct routing table: # tail -f /var/log/messages Oct 10 15:50:37 toy2 kernel: INPUT from Cli: IN=eth0 OUT= MAC=00:15:5d:3c:bc:03:1c:39:47:f0:74:87:08:00 SRC=192.168.60.159 DST=192.168.60.6 LEN=52 TOS=0x00 PREC=0x00 TTL=128 ID=19050 DF PROTO=TCP SPT=51409 DPT=1443 WINDOW=65280 RES=0x00 SYN URGP=0 MARK=0x64 Oct 10 15:50:37 toy2 kernel: Forward To TUN: IN=eth0 OUT=tunde SRC=192.168.60.159 DST=185.61.149.21 LEN=52 TOS=0x00 PREC=0x00 TTL=127 ID=19050 DF PROTO=TCP SPT=51409 DPT=443 WINDOW=65280 RES=0x00 SYN URGP=0 MARK=0x64 Oct 10 15:50:37 toy2 kernel: OUTPUT To TUN: IN= OUT=tunde SRC=192.168.60.159 DST=185.61.149.21 LEN=52 TOS=0x00 PREC=0x00 TTL=127 ID=19050 DF PROTO=TCP SPT=51409 DPT=443 WINDOW=65280 RES=0x00 SYN URGP=0 MARK=0x64 Oct 10 15:50:37 toy2 kernel: INPUT from TUN: IN=tunde OUT= MAC= SRC=185.61.149.21 DST=10.200.0.54 LEN=52 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=TCP SPT=443 DPT=51409 WINDOW=29200 RES=0x00 ACK SYN URGP=0 MARK=0x64 Oct 10 15:50:38 toy2 kernel: INPUT from TUN: IN=tunde OUT= MAC= SRC=185.61.149.21 DST=10.200.0.54 LEN=52 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=TCP SPT=443 DPT=51409 WINDOW=29200 RES=0x00 ACK SYN URGP=0 MARK=0x64 Oct 10 15:50:40 toy2 kernel: INPUT from Cli: IN=eth0 OUT= MAC=00:15:5d:3c:bc:03:1c:39:47:f0:74:87:08:00 SRC=192.168.60.159 DST=192.168.60.6 LEN=52 TOS=0x00 PREC=0x00 TTL=128 ID=19063 DF PROTO=TCP SPT=51409 DPT=1443 WINDOW=65280 RES=0x00 SYN URGP=0 MARK=0x64 Oct 10 15:50:40 toy2 kernel: Forward To TUN: IN=eth0 OUT=tunde SRC=192.168.60.159 DST=185.61.149.21 LEN=52 TOS=0x00 PREC=0x00 TTL=127 ID=19063 DF PROTO=TCP SPT=51409 DPT=443 WINDOW=65280 RES=0x00 SYN URGP=0 MARK=0x64 Oct 10 15:50:40 toy2 kernel: INPUT from TUN: IN=tunde OUT= MAC= SRC=185.61.149.21 DST=10.200.0.54 LEN=52 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=TCP SPT=443 DPT=51409 WINDOW=29200 RES=0x00 ACK SYN URGP=0 MARK=0x64 Oct 10 15:50:42 toy2 kernel: INPUT from TUN: IN=tunde OUT= MAC= SRC=185.61.149.21 DST=10.200.0.54 LEN=52 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=TCP SPT=443 DPT=51409 WINDOW=29200 RES=0x00 ACK SYN URGP=0 MARK=0x64 Oct 10 15:50:42 toy2 kernel: INPUT from TUN: IN=tunde OUT= MAC= SRC=185.61.149.21 DST=10.200.0.54 LEN=52 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=TCP SPT=443 DPT=51391 WINDOW=29200 RES=0x00 ACK SYN URGP=0 MARK=0x64 (see MARK=0x64 on outgoing and incoming packets) Now, iptables rules: # iptables-save *mangle :PREROUTING ACCEPT [6278:3182515] :INPUT ACCEPT [4169:3043489] :FORWARD ACCEPT [9:468] :OUTPUT ACCEPT [677:98865] :POSTROUTING ACCEPT [703:101438] -A PREROUTING -d 192.168.60.6/32 -i eth0 -p tcp -m tcp --dport 1443 -m state --state NEW -j CONNMARK --set-xmark 0x64/0xffffffff -A PREROUTING -m state --state RELATED,ESTABLISHED -j CONNMARK --restore-mark --nfmask 0xffffffff --ctmask 0xffffffff -A PREROUTING -m connmark --mark 0x64 -j MARK --set-xmark 0x64/0xffffffff -A PREROUTING -m state --state NEW -m connmark ! 
--mark 0x0 -j CONNMARK --save-mark --nfmask 0xffffffff --ctmask 0xffffffff -A PREROUTING -i tunde -j LOG --log-prefix " INPUT from TUN: " -A PREROUTING -s 192.168.60.159/32 -i eth0 -p tcp -m tcp --dport 1443 -j LOG --log-prefix " INPUT from Cli: " COMMIT *nat :PREROUTING ACCEPT [3487:307264] :POSTROUTING ACCEPT [57:13668] :OUTPUT ACCEPT [57:13668] -A PREROUTING -d 192.168.60.6/32 -i eth0 -p tcp -m tcp --dport 1443 -j DNAT --to-destination 185.61.149.21:443 -A POSTROUTING -o tunde -j LOG --log-prefix " OUTPUT To TUN: " -A POSTROUTING -d 192.168.60.159/32 -o eth0 -p tcp -m tcp --sport 1443 -j LOG --log-prefix " OUTPUT To Cli: " -A POSTROUTING -o tun+ -j MASQUERADE COMMIT *filter :INPUT DROP [0:0] :FORWARD DROP [0:0] :OUTPUT ACCEPT [680:99605] -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT -A INPUT -p icmp -j ACCEPT -A INPUT -i lo -j ACCEPT -A INPUT -p tcp -m state --state NEW -m multiport --dports 22,23 -j ACCEPT -A INPUT -p tcp -m state --state NEW -m tcp --dport 1443 -j ACCEPT -A INPUT -j REJECT --reject-with icmp-host-prohibited -A FORWARD -i tunde -j LOG --log-prefix " Forward From TUN: " -A FORWARD -o tunde -j LOG --log-prefix " Forward To TUN: " -A FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT -A FORWARD -p icmp -j ACCEPT -A FORWARD -i eth0 -o tun+ -j ACCEPT -A FORWARD -i tun+ -o eth0 -j ACCEPT -A FORWARD -j REJECT --reject-with icmp-host-prohibited COMMIT |
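A sketch of one likely culprit worth testing (not a confirmed diagnosis): with fwmark-based policy routing, strict reverse-path filtering can drop the replies arriving on tunde, because the reverse lookup is performed without the mark. That would match the observed behaviour where "ip rule to" works but "ip rule fwmark" does not:

sysctl -w net.ipv4.conf.tunde.rp_filter=2
sysctl -w net.ipv4.conf.all.rp_filter=2
# some kernels also let rp_filter honour the mark:
sysctl -w net.ipv4.conf.tunde.src_valid_mark=1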
"Access is denied" on Windows Server 2012 Remote Desktop Session Host Posted: 08 Jan 2022 01:02 PM PST I've got several Windows servers, all with the remote desktop session host role installed, and all normal domain users have starting getting "access is denied" messages when logging in. I've had to give normal users local administrative rights in order for them log in, but for lots of reason this is temporary and I cannot leave all users having local admin rights. Removing the session host role does fix the problem, but these servers are in an RDS farm and need the role installed. I've checked just about everything I can think of - there were some minor GPO changes the day this started and I've gone through all the GPOs looking for anything obvious. I've removed those GPOs from the relevant OUs, and have added fresh servers, but the problem still occurs. Anyone ever seen this sort of problem with remote desktop session hosts? |
RDP into PC with VPN Posted: 08 Jan 2022 04:06 PM PST I have a work PC (Win 7 Ent) (usually given to me by my clients) where I do work-related projects and a home PC (Win 10 Pro) where I do my freelancing stuff. Both are connected to my home network. Usually I just RDP from the home PC to the work one, span the screen across all three of my 27" monitors and work happily. But now I've got a work PC where I must use a VPN to access some of the client's internal resources. The VPN client is Cisco AnyConnect Secure Mobility Client, v.4.3.01095. Once the VPN connects I can no longer RDP from my home PC, even though we are on the same local home network. I can ping it using the local IP, but RDP won't connect. Is there a solution for this? There is no way the client will change any settings on the Cisco server. All I can do is tweak the work PC. Please advise. |
How can I use environment variables in ProxyPassMatch? Posted: 08 Jan 2022 05:03 PM PST I am having difficulty figuring out how to use environment variables in ProxyPassMatch . My general format: <LocationMatch "(?<THING>Regex)"> ProxyPassMatch http://example.com:8000/%{env:MATCH_THING} ProxyPassReverse / </LocationMatch> I have %{MATCH_THING}e logged and the log shows that the regex-captured URL is capturing what I want it to capture, but every time I try to access the LocationMatched URL through the proxy, I get 404 Not Found. It works when I directly try http://example.com:8000/RegexCapturedURL . Where RegexCapturedURL = %{MATCH_THING}e; Here are some of the ProxyPassMatch lines I have tried so far: ProxyPassMatch http://example.com:8000/%{env:MATCH_THING} ProxyPassMatch http://example.com:8000/%{MATCH_THING} ProxyPassMatch http://example.com:8000/%{THING} ProxyPassMatch http://example.com:8000/%{MATCH_THING}e What am I doing wrong or not understanding correctly? |
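In case the %{env:...} interpolation turns out to be the sticking point, an alternative that avoids it entirely is mod_rewrite's proxy flag, which passes regex captures straight to the backend. A sketch, assuming mod_rewrite and mod_proxy_http are loaded, with the pattern reduced to a placeholder since the real regex isn't shown:

RewriteEngine On
RewriteRule "^/things/(.+)$" "http://example.com:8000/$1" [P,L]
ProxyPassReverse "/" "http://example.com:8000/"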
Nginx - ip to https Posted: 08 Jan 2022 04:06 PM PST I have an nginx configuration issue. I can't force the use of HTTPS when accessing services via ip:port in the browser. For example, I use Emby and emby.domain.com redirects to https://emby.domain.com. But myip:8096 (the Emby port) doesn't redirect to https://... And it's the same for all services. Surprisingly, if I only enter the server's IP without a port, it redirects me to https://myip and I get a 404 error. Here are my server blocks: ssl_certificate /crt/ssl.crt; ssl_certificate_key /crt/ssl.key; # redirect 80 to 443 server { listen 80; return 301 https://$host$request_uri; } # stop main domain access server { listen 443 ssl; server_name domain.com www.domain.com; ssl on; location / { return 404; } } # a service for example server { listen 443 ssl; server_name my.domain.com www.my.domain.com; ssl on; location / { proxy_pass http://localhost:8000; } } Do you have any idea? :) |
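Two things follow from how nginx works here, offered as a sketch rather than a confirmed fix: nginx can only influence ports it actually listens on, so requests to myip:8096 go straight to Emby and never touch these server blocks; and a request for https://myip lands in whichever block nginx treats as the default. A catch-all default server makes that behaviour explicit:

server {
    listen 443 ssl default_server;
    server_name _;
    return 404;   # or redirect bare-IP requests to a preferred hostname instead
}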
Cannot access D-Link DES 3200-10 after resetting account Posted: 08 Jan 2022 01:02 PM PST I forgot my password and after googling I found that in recovery mode I can reset my account. I connected to the switch using the console port (VT100+ keyboard compatible), pressed Shift+6, and accessed recovery mode successfully. Using the command below I reset (deleted) all accounts (running show account now shows no accounts): #reset account After rebooting I cannot log in using an empty username and password! What am I doing wrong? DES-3200-10 Fast Ethernet Switch Command Line Interface Firmware: Build 4.38.B012 Copyright(C) 2012 D-Link Corporation. All rights reserved. Username: User Access Verification Username: User Access Verification Username:admin Password: Fail! Password: |
Nginx as reverse proxy with IBM Websphere upstream Posted: 08 Jan 2022 02:02 PM PST I have an IBM WebSphere serving multiple domains: x.x.x.x:8080/app1 x.x.x.x:9090/app2 ... I need to configure Nginx as reverse proxy to serve: app1.example.com app2.example.com Here is my config but it's not working: server { listen 443 ssl; server_name www.app1.example.com app1.example.com; ssl on; ssl_certificate example.com.crt; ssl_certificate_key example.com.key; ssl_trusted_certificate example.comCA.crt; ssl_session_cache shared:SSL:10m; ssl_session_timeout 10m; ssl_prefer_server_ciphers on; ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS; keepalive_timeout 70; location / { access_log app1-access.log; error_log app1-error.log; include /etc/nginx/mime.types; proxy_pass http://x.x.x.x:8080/app1/; add_header X-Proxy-Cache $upstream_cache_status; add_header Front-End-Https on; add_header Cache-Control "public, must-revalidate"; add_header Strict-Transport-Security "max-age=2592000; includeSubdomains"; proxy_next_upstream error timeout invalid_header http_500 http_502 http_503; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; } } With this config I can get the login screen, but I get 404 error after putting credentials. |
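Something worth checking when the login page loads but the next request 404s: the WebSphere applications live under /app1 and /app2, so redirects and cookies they emit still carry those prefixes unless they are rewritten. A sketch of the relevant nginx directives (the values are assumptions based on the paths in the question):

location / {
    proxy_pass http://x.x.x.x:8080/app1/;
    proxy_redirect http://x.x.x.x:8080/app1/ /;   # map backend redirects back to /
    proxy_cookie_path /app1 /;                    # keep session cookies valid under /
    proxy_set_header Host $host;
}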
How to remove this warning "This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register." Posted: 08 Jan 2022 12:40 PM PST I get the following warning when listing updates. The server is currently registered with ULN. Although it is only a warning, I do not want the message below displayed when I issue yum commands. I found out the subscription-manager plugin is loaded. How do I disable the subscription-manager plugin? Loaded plugins: downloadonly, product-id, rhnplugin, security, subscription-manager This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register. |
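Yum plugins each have their own config file, so the usual way to silence this is to disable the plugin there, or to disable it for a single run; the path below is the standard layout on RHEL/Oracle Linux, offered as a sketch:

# /etc/yum/pluginconf.d/subscription-manager.conf
[main]
enabled=0

# or one-off:
yum --disableplugin=subscription-manager check-update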
Write .htaccess file to configure subfolder as documentroot Posted: 08 Jan 2022 11:07 AM PST I have the following directory structure. /var/www/html is the DocumentRoot. var www html demo css abc.css js abc.js index.html How can I write an .htaccess file in the "demo" folder such that all resources in that folder think that "demo" is the documentRoot instead. ex: http://myserver.com/demo/css/abc.css should work. In the code of index.html, I just want to refer to the CSS files this way: <link href="/css/abc.css" media="screen" rel="stylesheet" type="text/css"> instead of "/demo/css/abc.css" |
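A .htaccess inside demo/ never sees a request for /css/abc.css, since that URL does not pass through the demo directory. One workaround sometimes used is a rewrite in the parent directory instead; a sketch, assuming mod_rewrite is enabled, AllowOverride permits it, and /css and /js should always be served from demo/:

# /var/www/html/.htaccess
RewriteEngine On
RewriteCond %{REQUEST_URI} !^/demo/
RewriteRule ^(css|js)/(.*)$ /demo/$1/$2 [L]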
Enforce minimum wait time for network connection Posted: 08 Jan 2022 05:03 PM PST We are currently using multiple HP ProCurve V1810-48G switches in our environment which we believe are causing issue with GPO based software installations. This problem surfaced after installing SSD drives into new laptops which seems to have sped up the boot processes enough to cause issues. We cannot reproduce this issue using HDD's. It feels like the issue is related to Spanning Tree support. The computer appears to be making the connection long enough to satisfy the "wait for network" condition, but is being disconnected prior processing the GPOs and contacting the file share. joeqwerty commented that the root solution is to turn on PortFast or the equivalent, but this setting is non-configurable on the V1810. I had found information on modifying "GpNetworkStartTimeoutPolicyValue" to specify a wait time before continuing to boot, however as soon as the network is detected the Timeout expires and booting continues. Modifying this value has no effect in testing. I tried inserting a startup script to ping localhost for a set amount of time to induce a delay as suggested in other places online, but that method also did not have the intended effect. It seems the simplest solution is to upgrade the switches to a model that has this feature exposed but at this time that is not an option I have available. Does anyone know of a workaround that could force the boot process to wait for the duration timeout even if the network is detected briefly? |
How to configure IIS 7 so partial content requests/range requests on mp4 static files work. Posted: 08 Jan 2022 03:08 PM PST I need help on how to set up IIS 7 so it can handle partial/range requests when serving mp4 files, so that Chrome can seek within and loop video. From what I have read, IIS is supposed to support this out of the box, but my setup does not appear to honor it. I have also read that if it is hosting an ASP.NET site it will not honor range requests. I do not need ASP.NET features, but I do not see how to change this in IIS either. |
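A sketch of the web.config pieces that usually matter here: the native static-file handler does honour Range requests, so making sure .mp4 has a MIME type and that no managed module intercepts static files is a reasonable starting point (the runAllManagedModulesForAllRequests value is an assumption about the ASP.NET site's current setting):

<configuration>
  <system.webServer>
    <staticContent>
      <remove fileExtension=".mp4" />
      <mimeMap fileExtension=".mp4" mimeType="video/mp4" />
    </staticContent>
    <modules runAllManagedModulesForAllRequests="false" />
  </system.webServer>
</configuration>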