Saturday, January 15, 2022

Recent Questions - Server Fault



How are Kubernetes persistent volumes related to AzureDisks in AKS?

Posted: 15 Jan 2022 06:50 AM PST

Say I have one K8s node with two pods. Each pod claims 5Gi of PV storage. The provisioned AKS VM (node) has a 32GiB SKU SSD AzureDisk data drive.

Will/can both of the 5Gi K8s volumes be located on the same AzureDisk?

If one AzureDisk is required per K8s volume, why would you claim anything but the AzureDisk SKU storage size for a pod (assuming you target only Azure)?

If one AzureDisk can be shared between multiple K8s volumes, then how do you map the paths for two pods to different AzureDisk paths? Or do they get separate partitions?

Can you move a K8s volume to another AzureDisk if you provision a bigger one?

The thing is, I have many pods that need a PV, but each only needs a small amount of storage, much less than the smallest SKU (4GB SSD, 32GB HDD). I guess one option would be to co-locate all the containers in one pod, but that just doesn't seem right either...
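For context on the first question: with dynamic provisioning, each PVC bound to a disk-backed storage class gets its own AzureDisk, while the Azure Files class backs PVCs with file shares instead. A sketch of a small claim (the class name `default` is the disk-backed class on many AKS versions; verify yours, e.g. `managed-csi` on newer clusters):

```yaml
# Hypothetical 5Gi claim; with a disk-backed class, AKS provisions
# one AzureDisk per PVC of at least the requested size.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: small-data
spec:
  accessModes:
    - ReadWriteOnce          # an AzureDisk attaches to one node at a time
  storageClassName: default  # disk-backed; "azurefile" uses shared file storage instead
  resources:
    requests:
      storage: 5Gi
```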

Using mail as the subdomain for webmail?

Posted: 15 Jan 2022 06:30 AM PST

I recently purchased a domain and I am planning to use the subdomain smtp.example.com for my mail server and mail.example.com to host my webmail app. Could there be any issues with using mail for something other than a mail server?
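For background: DNS itself attaches no special meaning to the label mail; mail routing is driven by the MX record, regardless of what each hostname actually serves. A hypothetical zone fragment (illustrative addresses):

```
example.com.       IN  MX  10 smtp.example.com.  ; delivery goes wherever MX points
smtp.example.com.  IN  A   203.0.113.10          ; mail server
mail.example.com.  IN  A   203.0.113.20          ; webmail app
```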

How to ensure Cloud-Init runs exactly once and once only?

Posted: 15 Jan 2022 07:12 AM PST

As far as I can see, cloud-init runs every time the config changes: not just the very first time the system boots, but every time the provided configuration changes. This makes some sense, as I guess it's hard to define the "first time" (the cloned VM already ran before being frozen and used as a template, so it's never really the first time). However, I've occasionally found that cloud-init re-runs on an already provisioned system when it reboots.

Some steps, however, seem to break the setup when cloud-init is run on a fully configured system. For example, suppose cloud-init set some configuration to value X during the initial run, you manually overrode it to Y afterwards, and now cloud-init re-runs and sets it back to X. Or it recreates your SSH host keys.

I've thus found it quite useful to manually run:

sudo touch /etc/cloud/cloud-init.disabled  

...after the initial setup to prevent cloud-init from ever running again. (In cases where cloud-init really is only used for an initial "clone & set IPs/hostname" kind of configuration.)

But is there any way to automate this? Like adding some parameter to /etc/cloud/cloud.cfg that disables it after the first run?
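I'm not aware of a built-in self-disabling parameter, but the manual touch can be folded into the user-data itself. A sketch in cloud-config form (runcmd entries run during the final stage of the instance's first boot):

```yaml
#cloud-config
# Last first-boot step: create the marker file that makes cloud-init
# skip all later boots.
runcmd:
  - [ touch, /etc/cloud/cloud-init.disabled ]
```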

Why is my nginx rate limiting not working?

Posted: 15 Jan 2022 08:06 AM PST

I have Nginx in front of Apache to terminate SSL and filter things. I'm trying to get rate limiting working using these docs.

When I test with a loop of 10 curl requests, all the requests for /mytest/ are still being forwarded to Apache; 8 of them show the same timestamp down to the second. I'm expecting only one request per second to reach Apache. What have I overlooked?

My curl command:

for i in $(seq 1 10); do curl --verbose  --request "GET /mytest" https://myhost.com; done  

nginx.conf:

http {
    ...
    include /etc/nginx/conf.d/*.conf;
    ...
}

/etc/nginx/conf.d/rate-limit.conf:

limit_req_zone $binary_remote_addr zone=one:10m rate=1r/s;

server {
    location /mytest/ {
        limit_req zone=one;    # removed burst=5 for testing - made no diff
    }
}
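Independent of the nginx side, note that curl's --request (-X) only replaces the HTTP method string; it does not set the request path. So --request "GET /mytest" still requests the root path, and the limited location /mytest/ is never matched. A test loop that actually exercises it puts the path in the URL (myhost.com is the hypothetical host from the question):

```
for i in $(seq 1 10); do
  curl -s -o /dev/null -w "%{http_code}\n" "https://myhost.com/mytest/"
done
```

With rate=1r/s and no burst, nginx should answer most of these with 503 rather than proxying them to Apache.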

SNMP server not receiving data

Posted: 15 Jan 2022 02:46 AM PST

I have a problem using SNMP: the server does not receive data. The service is running correctly and the port is listening.

If I run snmpwalk -v 2c -c mycommunity 192.168.1.82 (the server's own address) it answers, and snmpwalk -v 2c -c mycommunity localhost works as well. In other words, the service is working, but it does not respond to any machine other than itself.

sudo netstat -tulpn | grep snmp
udp        0      0 0.0.0.0:161    0.0.0.0:*    15014/snmpd

Has anyone run into something similar?

PowerDNS Slave Refused to receive notification notify Refused-*

Posted: 15 Jan 2022 03:15 AM PST

Hello, I'm getting the error notify Refused- on the slave server that is waiting for updated records from the master.

Here is the configuration and detailed info.

Specification

  • PowerDNS version: 4.5.2
  • OS: Ubuntu 20.04
  • Backend: MySQL

Master Configuration
pdns.conf

launch=
allow-axfr-ips=159.223.76.221/32
config-dir=/etc/powerdns
daemon=yes
disable-axfr=no
guardian=yes
local-address=0.0.0.0
local-port=53
log-dns-details=on
loglevel=3
master=yes
slave=no
setgid=pdns
setuid=pdns
socket-dir=/var/pdns
version-string=powerdns
include-dir=/etc/powerdns/pdns.d
api=yes
api-key=24xd

I can add any records on Master Server without any problem.

Slave Configuration
pdns.conf

launch=
#guardian=yes
daemon=on
log-dns-details=on
slave=yes
slave-cycle-interval=60
logging-facility=0
log-dns-queries=yes
loglevel=5
include-dir=/etc/powerdns/pdns.d

On the master server, I run the notify command:

 pdns_control notify gogon.xyz

On the slave DNS server, I capture the traffic with:

 tcpdump -n 'host 128.199.220.234 and port 53' -v

Here is what I got:

tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
08:06:08.926420 IP (tos 0x0, ttl 60, id 24776, offset 0, flags [DF], proto UDP (17), length 55)
    128.199.220.234.11643 > 159.223.76.221.53: 10150 notify [b2&3=0x2400] SOA? gogon.xyz. (27)
08:06:08.928383 IP (tos 0x0, ttl 64, id 20439, offset 0, flags [none], proto UDP (17), length 55)
    159.223.76.221.53 > 128.199.220.234.11643: 10150 notify Refused*- 0/0/0 (27)

Some online resources suggested that port 53/UDP needs to be open. Here is my UFW status:

53/tcp       ALLOW  Anywhere
53/udp       ALLOW  Anywhere
53/tcp (v6)  ALLOW  Anywhere (v6)
53/udp (v6)  ALLOW  Anywhere (v6)

On the slave, the record has also been added to the database:

+-----------------+----------------------+---------+
| ip              | nameserver           | account |
+-----------------+----------------------+---------+
| 128.199.220.234 | ns2.share-system.com | admin   |
+-----------------+----------------------+---------+

The ns1.share-system.com and ns2.share-system.com A records have been added in the domain control panel, and the nameserver records point at the master and slave IPs (ns1 -> master, ns2 -> slave).

I have already tried changing slave to secondary and master to primary in pdns.conf, without any success.

Any suggestions for solving this issue are appreciated.

Thank you.

Cloud + virtual private network vs. Local physical machine for company internal LARGE Git Repository: which option is better?

Posted: 15 Jan 2022 02:12 AM PST

I am currently using a Git-Repository setup with Gitlab on a cloud server and accessing it with a public IP provided by the vendor.

However, as the files stored in the Git repository get big, a lot of network disconnection issues occur, causing git pull/push operations to fail after a long wait.

Currently I have two options: (1) Buy a physical server and setup the Git server locally using an internal company router. (2) Buy a VPN option from the vendor to make the network more stable on the cloud.

The problem is that I am pretty new to cloud services and have never used a VPN provided by a cloud vendor. For option (2), I am not sure whether a VPN would really improve network stability, or whether it would not help much because traffic still goes through the same network path from my workplace to the vendor.

If anyone can give me some insight into whether option (2) works and/or how enterprises normally host their Git repositories, I would be grateful.

How to remove S2S connection?

Posted: 15 Jan 2022 04:16 AM PST

I recently tried out Snikket on one of my Android devices, but then removed it. A couple of days later, I noticed some S2S connections to "push.snikket.net".

Log entries:

2022-01-14 16:55:49.049 [info] <0.10068.0>@ejabberd_s2s_out:init:289 Outbound s2s connection started: xmpp.mydomain.com -> push.snikket.net
2022-01-14 16:55:49.808 [info] <0.10068.0>@ejabberd_s2s_out:handle_auth_success:223 (tls|<0.10068.0>) Accepted outbound s2s EXTERNAL authentication xmpp.mydomain.com -> push.snikket.net (64.225.64.225)
2022-01-14 16:55:50.779 [info] <0.10069.0>@ejabberd_s2s_in:handle_auth_success:183 (tls|<0.10069.0>) Accepted inbound s2s EXTERNAL authentication push.snikket.net -> xmpp.mydomain.com (::ffff:64.225.64.225)
2022-01-14 16:55:50.966 [info] <0.419.0>@mod_push:notify:514 push.snikket.net rejected notification for ryan@xmpp.mydomain.com (rYCqkWfRza/O) temporarily: recipient-unavailable

I am using ejabberd. How can I remove this connection permanently? I've tried sudo ./ejabberdctl stop_s2s_connections, but that doesn't seem to remove it completely. Thanks.
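If the goal is to stop ejabberd from ever contacting that host again, one option is an ACL-based s2s access rule. This is only a sketch against the YAML config format of recent ejabberd releases; the option names and rule layout should be checked against your version's documentation:

```yaml
# Sketch for ejabberd.yml: deny s2s to one remote server.
# "blocked_remote" is an arbitrary ACL name chosen here.
acl:
  blocked_remote:
    server: "push.snikket.net"

access_rules:
  s2s:
    - deny: blocked_remote
    - allow: all

s2s_access: s2s
```

Note that the log also shows mod_push trying to deliver notifications for ryan@xmpp.mydomain.com; a stale push registration for that account may be what keeps re-establishing the connection, so removing it would address the cause rather than the symptom.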

VPN without port forwarding using raspberry pi and a VPS

Posted: 15 Jan 2022 06:28 AM PST

I'm trying to set up a simple VPN without port forwarding.

I have:

  • a Raspberry Pi connected to a LAN (eth0 - 192.168.1.0/24)
  • an internet-accessible VPS
  • a laptop & an Android device that need access to the LAN via VPN

I read that I can use tinc to establish a peer-to-peer connection between the Raspberry Pi and the VPS. This worked great, so now I have a network between the VPS and the Raspberry Pi on 10.0.0.0/32 on dev tun0:

  • VPS running tinc server 10.0.0.1
  • Raspberry pi running tinc client 10.0.0.2 (subnet 10.0.0.0/32 & subnet 192.168.1.0/24)

From the VPS I can access the LAN (e.g. 192.168.1.1) over SSH, which is great. But the problem now is connecting to the VPS over a new VPN connection. For this I installed OpenVPN on the VPS.

This created a dev tun1 on the VPS; the VPS has 10.8.0.1. When I connect to the VPS over OpenVPN, my client gets 10.8.0.2.

The issue is that I cannot ping 192.168.1.1 or 10.0.0.2 from the client, but I can ping 10.0.0.1.

Any idea what I could be doing wrong?

thanks in advance!
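One thing to check (an assumption, since the full configs aren't shown): the OpenVPN client only learns the routes the server pushes, and the VPS must forward packets between tun1 (OpenVPN) and tun0 (tinc). A sketch of the relevant fragments on the VPS:

```
# /etc/sysctl.conf: let the VPS route between tun0 and tun1
net.ipv4.ip_forward = 1

# OpenVPN server.conf: advertise the tinc network and the remote LAN
push "route 10.0.0.0 255.255.255.0"
push "route 192.168.1.0 255.255.255.0"
```

The return path matters too: the Raspberry Pi (and anything on 192.168.1.0/24 it forwards for) needs a route back to 10.8.0.0/24 via the VPS, or the VPS must NAT the OpenVPN subnet.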

Windows 10: How do I set a hard disk offline by default?

Posted: 15 Jan 2022 05:21 AM PST

I have a disk that I want to keep in an offline state, even across reboots. It's a backup disk that should remain offline until I'm doing a backup. Short of opening the case and pulling the power/data cables, is there a way to force the disk to be offline by default?

This is for Windows 10 Professional RTM. I figure it's probably a registry hack, but I'm not sure where to look. The disk is inside the case of the machine, but it's a dynamic disk because the original intention was to use it with a hot-swap bay.

I know how to do it in Unix/Linux environments with the /etc/fstab file and rc scripts, but Windows is a strange beast when it comes to system stuff.

EDIT: To clarify, I'm running backups manually, not scripted. I would prefer the disk to remain offline after a reboot no matter the previous state.
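One avenue worth testing (a sketch, not a verified recipe for dynamic disks) is diskpart: the SAN policy controls whether Windows brings disks online automatically, and a disk can also be taken offline explicitly. The disk number below is a placeholder:

```
REM Run diskpart as Administrator; "disk 2" is a placeholder.
DISKPART> san policy=offlineAll   REM keep newly discovered non-boot disks offline
DISKPART> select disk 2
DISKPART> offline disk            REM take the backup disk offline now
```

Whether the offline state survives a reboot for an internal dynamic disk is exactly the part to verify.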

Amazon SES - Domain Verification - Does it expire?

Posted: 15 Jan 2022 07:03 AM PST

I read the documentation about verifying domains in Amazon SES, and I don't understand: how long does the domain verification last, and does it expire?

What happens if I remove the DNS settings after I verified a domain? I tried to do it with the DKIM DNS settings, and there I received a notification that the DKIM of the domain would be revoked, unless I restored them. But with the domain itself, I removed the DNS settings and I didn't receive any notification.

Iptables limit connections

Posted: 15 Jan 2022 05:08 AM PST

I found this solution for limiting connections:

iptables -A INPUT -p tcp --syn -m connlimit --connlimit-above 5 --connlimit-mask 0 -j DROP  

When testing this rule on my server, I created 10 instances to connect; it works in that it allows only 5 connections. However, whenever I restart my server, the first 5 instances are always the ones that stay connected. I expected the rule to be fair to all instances instead of favoring the first 5. How can I achieve this?
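Part of this behavior follows from --connlimit-mask 0: a zero-length mask groups every source address into a single bucket, so there are only 5 slots globally and whichever clients connect first hold them until they disconnect. If the intent is a per-client cap instead, the full /32 mask counts each source separately (sketch):

```
# 5 concurrent connections per source IP (/32), rather than 5 shared
# across all sources (mask 0):
iptables -A INPUT -p tcp --syn -m connlimit --connlimit-above 5 --connlimit-mask 32 -j DROP
```

A global limit that is also fair among clients is not something connlimit provides by itself; it only counts connections, it doesn't schedule them.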

PowerDNS & Log4j

Posted: 15 Jan 2022 08:25 AM PST

I'm running PowerDNS on Linux. It looks like PowerDNS is vulnerable to the new Log4j exploit. Is there any way I can disable Log4j? From my research it looks like you can change the logging method to syslog, but I'm not quite sure how to do that.

KVM nat command line

Posted: 15 Jan 2022 08:19 AM PST

What is the correct way to set up NAT networking between a KVM VM and the host?

KVM VM:

No firewall installed.

$ sudo arp-scan -r 5 -t 1000 --interface=eth0 --localnet

10.0.2.2     52:55:0a:00:02:02    locally administered
10.0.2.3     52:55:0a:00:02:03    locally administered

$ ip r

default via 10.0.2.2 dev eth0 proto dhcp metric 100
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 100

ifconfig

eth0: inet 10.0.2.15  netmask 255.255.255.0  broadcast 10.0.2.255
      ether 52:54:00:12:34:56
lo:   inet 127.0.0.1  netmask 255.0.0.0
      inet6 ::1

Host:

:~$ ip r

0.0.0.0/1 via 10.211.1.10 dev tun0
default via 192.168.1.1 dev wlan0 proto dhcp metric 600
10.21xxxxxxxx dev tun0 proto kernel scope link src 10.21xxxxx
xxxxxxxxxxxx dev wlan0
128.0.0.0/1 via 10.211.1.10 dev tun0
192.168.1.0/24 dev wlan0 proto kernel scope link src 192.168.1.172 metric 600
192.168.4.0/22 dev eth0 proto kernel scope link src 192.168.4.8 metric 100

:~$ ifconfig

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
      inet 10.0.2.3  netmask 255.0.0.0  broadcast 10.255.255.255
      inet6 fe80::76c8:79b4:88d4:7f5c  prefixlen 64  scopeid 0x20<link>
      ether ec:8e:b5:71:33:6e  txqueuelen 1000  (Ethernet)
      RX packets 1700  bytes 194730 (190.1 KiB)
      RX errors 0  dropped 0  overruns 0  frame 0
      TX packets 2862  bytes 246108 (240.3 KiB)
      TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
      device interrupt 16  memory 0xe1000000-e1020000

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
    inet 127.0.0.1  netmask 255.0.0.0
    inet6 ::1  prefixlen 128  scopeid 0x10<host>
    loop  txqueuelen 1000  (Local Loopback)
    RX packets 13251  bytes 7933624 (7.5 MiB)
    RX errors 0  dropped 0  overruns 0  frame 0
    TX packets 13251  bytes 7933624 (7.5 MiB)
    TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

tun0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST>  mtu 1500
      inet 10.211.1.69  netmask 255.255.255.255  destination 10.211.1.70
      inet6 fe80::a920:941c:ffa8:5579  prefixlen 64  scopeid 0x20<link>
      unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  txqueuelen 100  (UNSPEC)
      RX packets 4348  bytes 2242726 (2.1 MiB)
      RX errors 0  dropped 0  overruns 0  frame 0
      TX packets 3823  bytes 404190 (394.7 KiB)
      TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

wlan0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
       inet 192.168.1.172  netmask 255.255.255.0  broadcast 192.168.1.255
       inet6 fe80::651b:5014:7929:9ba3  prefixlen 64  scopeid 0x20<link>
       ether d8:55:a3:d5:d1:30  txqueuelen 1000  (Ethernet)
       RX packets 114455  bytes 117950099 (112.4 MiB)
       RX errors 0  dropped 0  overruns 0  frame 0
       TX packets 67169  bytes 14855011 (14.1 MiB)
       TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

~$ sudo arp-scan -r 5 -t 1000 --localnet

just hangs......  

The host cannot ping 10.0.2.2.

No firewall is enabled.

Tried

$ sudo ip route add default via 10.0.2.0
$ sudo ip route add default via 10.0.2.2
$ sudo ip route add default via 10.0.2.0/24

Can NAT work without virsh?

Can NAT be fixed from the command line only?

Update:

$ sudo ip link add natbr0 type bridge
$ sudo ip link set dev natbr0 up
$ sudo ip link set dev eth0 up
$ sudo ip link set dev eth0 master natbr0

That works to bridge eth0 as a slave to the KVM VM: the VM can ping other computers on the network, but not the host. @Tom Yan's answer combined with the Arch Linux Network_bridge article produced the commands above, which can ping other network IPs.

So I tried to change the working bridge connection to allow the host and the KVM VM to talk.

Goal: host$ ping kvm

$ sudo ip link add natbr0 type bridge
$ sudo ip link set dev natbr0 up
$ sudo ip a add 10.0.2.1/24 dev natbr0
$ sudo kvm -m 3G -hdb /dev/sde -nic bridge,br=natbr0

kvm$ sudo ip link add natbr0 type bridge
kvm$ sudo ip a add 10.0.2.2
kvm$ sudo ip link set dev natbr0 up

The VM can ping itself:

$ ping 10.0.2.2

PING 10.0.2.2 (10.0.2.2) 56(84) bytes of data
64 bytes from 10.0.2.2: icmp_seq=1 ttl=64 time=0.027 ms

but kvm$ ping 10.0.2.1 gives:

Destination Host Unreachable  

host$ ping 10.0.2.2

(just hangs)  

I prefer the command line to test the resilience of the process/system bare bones, versus a lot of scripts that pose more points of failure: a command either works or it doesn't, and errors are more easily traced, isolated, and reproduced. Depending on the Linux flavor, certain scripts or parts of scripts (like those in the XML-based alternative solutions offered) may or may not work. If bridging with KVM can be reproduced on any Linux flavor by following the commands above, then it seems possible that KVM NAT can also be achieved using CLI commands. Just to clarify the point of this post: CLI steps to NAT a KVM VM are more standardized, and therefore preferable.

Generally, @NikitaKipriyanov's answer was the correct road. This was the answer, but it required a tweak to the command:

$ sudo kvm -m 3G -hdb /dev/sde -net nic -net user,hostfwd=tcp::1810-:22

Using the tweaked command, the VM can communicate with the internet as with the default setup and can also communicate with the host via SSH. Credit to @NikitaKipriyanov and @cnst for the tweak: https://stackoverflow.com/a/54120040

The user will need to SSH on port 1810 using the localhost address:

$ ssh p@localhost -p 1810

OpenVPN Domain Suffix

Posted: 15 Jan 2022 05:01 AM PST

I've got an OpenVPN server to access our infrastructure remotely. All internal infrastructure is assigned a DNS name in the form SERVER_NAME.my.company.domain. When on site, the DNS suffix is set to "my.company.domain" through DHCP option 15. I've tried to do the same through OpenVPN, but it doesn't seem to work.

My server configuration has the following:

push "dhcp-option DOMAIN my.company.domain"
push "dhcp-option DNS 10.4.0.21"
push "dhcp-option DNS 10.4.0.22"

When connecting through OpenVPN Connect on both Mac and Windows, the search domain is listed correctly in the log file. The DNS servers are pushed correctly, as I can access infrastructure through its full DNS names; however, when I use only SERVER_NAME without the suffix, I'm unable to access anything.

Lost ssh access to Google Cloud VM

Posted: 15 Jan 2022 07:02 AM PST

I have a VM (Debian) running on Google Cloud Platform, but I can't connect via SSH or the serial console (I can't create a user via startup script, for some reason). I've already tried a bunch of troubleshooting guides trying to fix it.

I was using the ssh connection previously with no problems at all. The website and databases running on that VM are still working.

I've tried

1 - Checked if firewall entry "default-allow-ssh" exists

2 - Tried connecting with a different user using cmd

gcloud compute ssh another-username@$PROB_INSTANCE  

3 - Added metadata "startup-script" key with value:

#! /bin/bash
useradd -G sudo USER
echo 'USER:PASS' | chpasswd

Rebooted (also tried interrupt/start), then tried connecting via the serial console, but it says the login is incorrect. The startup script is not running or not creating my user.

4 - Increased disk size.

5 - Increased memory (upgraded the VM instance type).

6 - Removed ssh keys from both VM details and Metadata tabs, followed by a reboot:

After removing them, I tried to generate keys again using the command:

gcloud beta compute ssh INSTANCE_NAME -- -vvv   

but it returns:

No zone specified. Using zone [us-east1-b] for instance: [INSTANCE_NAME].
Updating project ssh metadata...⠏Updated [https://www.googleapis.com/compute/beta/projects/PROJECT_NAME].
Updating project ssh metadata...done.
Waiting for SSH key to propagate.
USER@IP_ADDRESS: Permission denied (publickey).
USER@IP_ADDRESS: Permission denied (publickey).
USER@IP_ADDRESS: Permission denied (publickey).
USER@IP_ADDRESS: Permission denied (publickey).
USER@IP_ADDRESS: Permission denied (publickey).
USER@IP_ADDRESS: Permission denied (publickey).
USER@IP_ADDRESS: Permission denied (publickey).
USER@IP_ADDRESS: Permission denied (publickey).
USER@IP_ADDRESS: Permission denied (publickey).
USER@IP_ADDRESS: Permission denied (publickey).
USER@IP_ADDRESS: Permission denied (publickey).
USER@IP_ADDRESS: Permission denied (publickey).

More details

Running

gcloud beta compute ssh --zone ZONE INSTANCE_NAME --project PROJECT_NAME  

returns:

USER@IP_ADDRESS: Permission denied (publickey).  

Running (a second time, after waiting for propagation)

gcloud beta compute ssh INSTANCE_NAME -- -vvv   

returns:

[...]
OpenSSH_7.9p1 Debian-10+deb10u2, OpenSSL 1.1.1g  21 Apr 2020
debug1: Reading configuration data /home/USER/.ssh/config
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: Applying options for *
debug2: resolve_canonicalize: hostname IP_ADDRESS is address
debug2: ssh_connect_direct
debug1: Connecting to IP_ADDRESS [IP_ADDRESS] port 22.
debug1: Connection established.
debug1: identity file /home/USER/.ssh/google_compute_engine type 0
debug1: identity file /home/USER/.ssh/google_compute_engine-cert type -1
debug1: Local version string SSH-2.0-OpenSSH_7.9p1 Debian-10+deb10u2
debug1: Remote protocol version 2.0, remote software version OpenSSH_7.4p1 Debian-10+deb9u7
debug1: match: OpenSSH_7.4p1 Debian-10+deb9u7 pat OpenSSH_7.0*,OpenSSH_7.1*,OpenSSH_7.2*,OpenSSH_7.3*,OpenSSH_7.4*,OpenSSH_7.5*,OpenSSH_7.6*,OpenSSH_7.7* compat 0x04000002
debug2: fd 3 setting O_NONBLOCK
debug1: Authenticating to IP_ADDRESS:22 as 'USER'
debug1: using hostkeyalias: compute.INSTANCE_ID
debug3: hostkeys_foreach: reading file "/home/USER/.ssh/google_compute_known_hosts"
debug3: record_hostkey: found key type ECDSA in file /home/USER/.ssh/google_compute_known_hosts:1
debug3: load_hostkeys: loaded 1 keys from compute.INSTANCE_ID
debug3: order_hostkeyalgs: prefer hostkeyalgs: ecdsa-sha2-nistp256-cert-v01@openssh.com,ecdsa-sha2-nistp384-cert-v01@openssh.com,ecdsa-sha2-nistp521-cert-v01@openssh.com,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521
debug3: send packet: type 20
debug1: SSH2_MSG_KEXINIT sent
debug3: receive packet: type 20
debug1: SSH2_MSG_KEXINIT received
debug2: local client KEXINIT proposal
debug2: KEX algorithms: curve25519-sha256,curve25519-sha256@libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group14-sha256,diffie-hellman-group14-sha1,ext-info-c
debug2: host key algorithms: ecdsa-sha2-nistp256-cert-v01@openssh.com,ecdsa-sha2-nistp384-cert-v01@openssh.com,ecdsa-sha2-nistp521-cert-v01@openssh.com,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,ssh-ed25519-cert-v01@openssh.com,rsa-sha2-512-cert-v01@openssh.com,rsa-sha2-256-cert-v01@openssh.com,ssh-rsa-cert-v01@openssh.com,ssh-ed25519,rsa-sha2-512,rsa-sha2-256,ssh-rsa
debug2: ciphers ctos: chacha20-poly1305@openssh.com,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com
debug2: ciphers stoc: chacha20-poly1305@openssh.com,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com
debug2: MACs ctos: umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1
debug2: MACs stoc: umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1
debug2: compression ctos: none,zlib@openssh.com,zlib
debug2: compression stoc: none,zlib@openssh.com,zlib
debug2: languages ctos:
debug2: languages stoc:
debug2: first_kex_follows 0
debug2: reserved 0
debug2: peer server KEXINIT proposal
debug2: KEX algorithms: curve25519-sha256,curve25519-sha256@libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group14-sha256,diffie-hellman-group14-sha1
debug2: host key algorithms: ssh-rsa,rsa-sha2-512,rsa-sha2-256,ecdsa-sha2-nistp256,ssh-ed25519
debug2: ciphers ctos: chacha20-poly1305@openssh.com,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com
debug2: ciphers stoc: chacha20-poly1305@openssh.com,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com
debug2: MACs ctos: umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1
debug2: MACs stoc: umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1
debug2: compression ctos: none,zlib@openssh.com
debug2: compression stoc: none,zlib@openssh.com
debug2: languages ctos:
debug2: languages stoc:
debug2: first_kex_follows 0
debug2: reserved 0
debug1: kex: algorithm: curve25519-sha256
debug1: kex: host key algorithm: ecdsa-sha2-nistp256
debug1: kex: server->client cipher: chacha20-poly1305@openssh.com MAC: <implicit> compression: none
debug1: kex: client->server cipher: chacha20-poly1305@openssh.com MAC: <implicit> compression: none
debug3: send packet: type 30
debug1: expecting SSH2_MSG_KEX_ECDH_REPLY
debug3: receive packet: type 31
debug1: Server host key: ecdsa-sha2-nistp256 SHA256:Or8[...]
debug1: using hostkeyalias: compute.INSTANCE_ID
debug3: hostkeys_foreach: reading file "/home/USER/.ssh/google_compute_known_hosts"
debug3: record_hostkey: found key type ECDSA in file /home/USER/.ssh/google_compute_known_hosts:1
debug3: load_hostkeys: loaded 1 keys from compute.INSTANCE_ID
debug1: Host 'compute.INSTANCE_ID' is known and matches the ECDSA host key.
debug1: Found key in /home/USER/.ssh/google_compute_known_hosts:1
debug3: send packet: type 21
debug2: set_newkeys: mode 1
debug1: rekey after 134217728 blocks
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug3: receive packet: type 21
debug1: SSH2_MSG_NEWKEYS received
debug2: set_newkeys: mode 0
debug1: rekey after 134217728 blocks
debug1: Will attempt key: /home/USER/.ssh/google_compute_engine RSA SHA256:brI3[...] explicit
debug2: pubkey_prepare: done
debug3: send packet: type 5
debug3: receive packet: type 7
debug1: SSH2_MSG_EXT_INFO received
debug1: kex_input_ext_info: server-sig-algs=<ssh-ed25519,ssh-rsa,ssh-dss,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521>
debug3: receive packet: type 6
debug2: service_accept: ssh-userauth
debug1: SSH2_MSG_SERVICE_ACCEPT received
debug3: send packet: type 50
debug3: receive packet: type 51
debug1: Authentications that can continue: publickey
debug3: start over, passed a different list publickey
debug3: preferred gssapi-keyex,gssapi-with-mic,publickey,keyboard-interactive,password
debug3: authmethod_lookup publickey
debug3: remaining preferred: keyboard-interactive,password
debug3: authmethod_is_enabled publickey
debug1: Next authentication method: publickey
debug1: Offering public key: /home/USER/.ssh/google_compute_engine RSA SHA256:brI3[...] explicit
debug3: send packet: type 50
debug2: we sent a publickey packet, wait for reply
debug3: receive packet: type 51
debug1: Authentications that can continue: publickey
debug2: we did not send a packet, disable method
debug1: No more authentication methods to try.
USER@IP_ADDRESS: Permission denied (publickey).
ERROR: (gcloud.beta.compute.ssh) [/usr/bin/ssh] exited with return code [255].

Update

Following Alex's suggestions, the serial port output returns:

Welcome to Debian GNU/Linux 9 (stretch)!

[    2.364319] systemd[1]: No hostname configured.
[    2.365157] systemd[1]: Set hostname to <localhost>.
[    3.142016] systemd[1]: google-shutdown-scripts.service: Cannot add dependency job, ignoring: Unit google-shutdown-scripts.service is masked.
[    3.144581] systemd[1]: google-clock-skew-daemon.service: Cannot add dependency job, ignoring: Unit google-clock-skew-daemon.service is masked.
[    3.147589] systemd[1]: google-instance-setup.service: Cannot add dependency job, ignoring: Unit google-instance-setup.service is masked.
[    3.149799] systemd[1]: google-accounts-daemon.service: Cannot add dependency job, ignoring: Unit google-accounts-daemon.service is masked.
[    3.152485] systemd[1]: google-startup-scripts.service: Cannot add dependency job, ignoring: Unit google-startup-scripts.service is masked.

I really hope there is a fix :/

I'd appreciate any help or tips. Thanks!

What causes SSH error: kex_exchange_identification: Connection closed by remote host?

Posted: 15 Jan 2022 05:08 AM PST

I set up an SSH server online that is publicly accessible by anyone. Therefore, I get a lot of connections from IPs all over the world. Strangely, none actually try to authenticate to open a session. I can connect and authenticate myself without any problem.

From time to time, I get the error: kex_exchange_identification: Connection closed by remote host in the server logs. What causes that?

Here is 30 minutes of SSH logs (public IPs have been redacted):

# journalctl SYSLOG_IDENTIFIER=sshd -S "03:30:00" -U "04:00:00"
-- Logs begin at Fri 2020-01-31 09:26:25 UTC, end at Mon 2020-04-20 08:01:15 UTC. --
Apr 20 03:39:48 myhostname sshd[18438]: Connection from x.x.x.207 port 39332 on 10.0.0.11 port 22 rdomain ""
Apr 20 03:39:48 myhostname sshd[18439]: Connection from x.x.x.207 port 39334 on 10.0.0.11 port 22 rdomain ""
Apr 20 03:39:48 myhostname sshd[18438]: Connection closed by x.x.x.207 port 39332 [preauth]
Apr 20 03:39:48 myhostname sshd[18439]: Connection closed by x.x.x.207 port 39334 [preauth]
Apr 20 03:59:36 myhostname sshd[22186]: Connection from x.x.x.83 port 34876 on 10.0.0.11 port 22 rdomain ""
Apr 20 03:59:36 myhostname sshd[22186]: error: kex_exchange_identification: Connection closed by remote host

And here is my SSH configuration:

# ssh -V
OpenSSH_8.2p1, OpenSSL 1.1.1d  10 Sep 2019
# cat /etc/ssh/sshd_config
UsePAM yes
AddressFamily any
Port 22
X11Forwarding no
PermitRootLogin prohibit-password
GatewayPorts no
PasswordAuthentication no
ChallengeResponseAuthentication no
PrintMotd no # handled by pam_motd
AuthorizedKeysFile .ssh/authorized_keys .ssh/authorized_keys2 /etc/ssh/authorized_keys.d/%u
HostKey /etc/ssh/ssh_host_rsa_key
HostKey /etc/ssh/ssh_host_ed25519_key
KexAlgorithms curve25519-sha256@libssh.org,diffie-hellman-group-exchange-sha256
Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes128-gcm@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr
MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-512,hmac-sha2-256,umac-128@openssh.com
LogLevel VERBOSE
UseDNS no
AllowUsers root
AuthenticationMethods publickey
MaxStartups 3:100:60

After searching the web, I found references to MaxStartups indicating that it could be the reason for this error, but after changing the default value as shown in my sshd_config and attempting more than 3 connections, the server unambiguously indicates the problem:

Apr 20 07:26:59 myhostname sshd[31468]: drop connection #3 from [x.x.x.226]:54986 on [10.0.0.11]:22 past MaxStartups  

So, what causes error: kex_exchange_identification: Connection closed by remote host?
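This error is typically emitted when a peer opens the TCP connection and then closes it before sending an SSH identification banner, which is exactly what mass scanners and port probes do. A minimal sketch of that behavior against a toy TCP server standing in for sshd:

```python
import socket
import threading

def accept_once(srv, result):
    # A real sshd would expect an "SSH-2.0-..." identification banner here;
    # recv() returning b'' means the peer closed before sending one.
    conn, _ = srv.accept()
    result.append(conn.recv(1024))
    conn.close()

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))   # toy stand-in for sshd, random free port
srv.listen(1)
result = []
t = threading.Thread(target=accept_once, args=(srv, result))
t.start()

# Scanner-like client: open the TCP connection, then close it without
# ever sending an SSH identification string.
client = socket.create_connection(srv.getsockname())
client.close()

t.join()
srv.close()
print(result[0])  # b'' -- the condition sshd reports as kex_exchange_identification
```

From sshd's perspective this pre-banner disconnect is harmless noise, distinct from the MaxStartups drop shown above.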

Using traefik and docker to securely expose some containers publicly

Posted: 15 Jan 2022 04:02 AM PST

I have a 16-core, 128 GB server that handles all kinds of stuff at home. In a VM I run a Windows domain controller, and all my Windows PCs are joined to that domain.

On the server, I also run multiple services in Docker containers. Initially, I accessed them by remembering the ports I was running them on, but when I found Traefik I set that up and added DNS records to my Domain DNS to point all the services to the IP of the server.

I also set up my own internal certificate authority on my pfSense box and created a wildcard certificate for all my Traefik-routed services.

I'm using the "official" Docker image of Traefik, and my configuration looks like this.

docker-compose.yml

services:
  traefik:
    image: traefik:1.5.4
    restart: always
    ports:
      - 8088:8080
      - 80:80
      - 443:443
    networks:
      - web
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /docker/containers/traefik/traefik.toml:/traefik.toml
      - /docker/containers/traefik/acme.json:/acme.json
    container_name: traefik

networks:
  web:
    external: true

To traefik.toml I added

# Entrypoints to be used by frontends that do not specify any entrypoint.
# Each frontend can specify its own entrypoints.
#
# Optional
# Default: ["http"]
#
defaultEntryPoints = ["http", "https"]

################################################################
# Entrypoints configuration
################################################################

# Entrypoints definition
#
# Optional
# Default:
[entryPoints]
  [entryPoints.http]
  address = ":80"
    [entryPoints.http.redirect]
      entryPoint = "https"
  [entryPoints.https]
  address = ":443"
    [entryPoints.https.tls]
      [[entryPoints.https.tls.certificates]]
        certFile = "/certs/wildcard.internal.my.domain.com.crt"
        keyFile = "/certs/wildcard.internal.my.domain.com.key"

Then, on a given Docker container, I set labels like traefik.basic.frontend.rule to configure the routing for that container.

This works great: all traffic to my services uses easy-to-remember URLs and is encrypted via SSL using the wildcard certificate, without me having to create new certificates for every service or change configurations.

Now, the "issue" is that I want to host some public websites on the server. For argument's sake: I want everything under internal.my.domain.com to be accessible only within my network, and, for instance, foo.my.domain.com and bar.my.domain.com to be accessible from outside. I understand I will have to create public DNS records for those domains pointing at my server here at home.

But my questions are

  • Can I set up the Docker containers so that some are only accessible inside the network and some from outside?
  • Can I set up Traefik to route the traffic to the correct containers, and also handle that some are "external" and some are internal only?
  • Can I set up Traefik's Let's Encrypt integration to handle certificates for all "external" addresses, and keep my own CA's self-signed wildcard certificate for my internal services?

Also, since I have a four-port NIC on my pfSense box and several external IP addresses, I'm thinking about using one external IP address for the public stuff and another for my "normal" traffic, so that the IP I use for personal traffic isn't as easily discovered by pinging one of my external hostnames and then DoS'ing me while I play a game :).

  • What is the simplest way to set this up?
  • Is using a virtual interface on my server (running Ubuntu) or using another dedicated Ethernet port (it has two) the better way?
  • How would I set up Traefik to handle traffic on multiple interfaces?
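One way to split internal and public traffic, sticking to the Traefik 1.x syntax used above, is to bind separate entrypoints to separate IP addresses and then pin each container to an entrypoint via a label. This is only a sketch; the IP addresses, the entrypoint name "public", and the email are placeholders:

```toml
[entryPoints]
  # Internal-only: bound to the LAN IP, serving the internal CA's wildcard cert
  [entryPoints.https]
  address = "192.168.1.10:443"
    [entryPoints.https.tls]
      [[entryPoints.https.tls.certificates]]
        certFile = "/certs/wildcard.internal.my.domain.com.crt"
        keyFile = "/certs/wildcard.internal.my.domain.com.key"

  # Public: bound to the address your public IP is forwarded to;
  # no static certificate here, so ACME can supply one
  [entryPoints.public]
  address = "203.0.113.10:443"

[acme]
email = "you@my.domain.com"
storage = "acme.json"
entryPoint = "public"
onHostRule = true
```

A public container would then get the label traefik.frontend.entryPoints=public, while internal containers keep the default entrypoints; certificates requested by [acme] are served only on its entrypoint, which should keep Let's Encrypt off the internal services.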

Is it possible to see the output of a startup script in Compute Engine?

Posted: 15 Jan 2022 03:44 AM PST

My Compute Engine VM runs a startup script when deployed. Everything seems to work well, but there is one command in the startup script that I think doesn't.

I run the command

apt-get update && apt-get upgrade -y  

This should install the newest versions of all packages (right?)

When I do this by hand it works, but it takes a lot of time. If I let the script do it, I don't see any output when I connect over SSH, so I have to assume it's still running. Is there a way to see whether it is still working and whether it has finished?
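On GCE images that ship the Google guest environment (Debian/Ubuntu and similar), the startup script runs as a systemd unit, so its output can be inspected while it runs. A sketch of the usual checks; the instance name and zone are placeholders:

```
# On the VM: follow the startup script's output as it runs
sudo journalctl -u google-startup-scripts.service -f

# On the VM: see whether the unit is still active or has finished
systemctl status google-startup-scripts.service

# From your workstation: startup-script output is also copied to the
# serial console log
gcloud compute instances get-serial-port-output my-instance --zone europe-west1-b
```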

This is the script:

#! /bin/bash
file="/var/www/check.txt"

if [ -e $file ]
then
apt-get update && apt-get upgrade -y
git -C /var/www/html pull https://xxxxxx:xxxxxxxx@bitbucket.org/xxxxxx/xxxxx.git
else
apt-get update
apt-get install apache2 php libapache2-mod-php php-mcrypt php-mysql mysql-client -y
a2dismod autoindex
service apache2 restart

cat <<EOF > /etc/apache2/mods-enabled/dir.conf
<IfModule mod_dir.c>
        DirectoryIndex index.php index.cgi index.pl index.html index.xhtml index.htm
</IfModule>

# vim: syntax=apache ts=4 sw=4 sts=4 sr noet
EOF

rm -rf /var/www/html
git clone https://xxxxxx:xxxxxx@bitbucket.org/xxxxxx/xxxxx.git /var/www/html/

cat <<EOF > /etc/apache2/sites-available/xxxxx.conf
<VirtualHost *:80>
  ServerName  xxxxxx.com
  ServerAlias www.xxxxxx.com
  ServerAdmin webmaster@xxxxx.xx
  DocumentRoot /var/www/html/wwwroot
  ErrorLog ${APACHE_LOG_DIR}/error.log
  CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>

# vim: syntax=apache ts=4 sw=4 sts=4 sr noet
EOF

cat <<EOF > /etc/apache2/sites-available/020-xxxxx_xxxx.conf
<VirtualHost *:80>
  ServerName  xxxx.xxxxx.xxx
  ServerAlias xxxx
  ServerAdmin webmaster@xxxxx.xx
  DocumentRoot /var/www/html/xxxxx
  ErrorLog ${APACHE_LOG_DIR}/error.log
  CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>

# vim: syntax=apache ts=4 sw=4 sts=4 sr noet
EOF

cat <<EOF > /var/www/html/wwwroot/.htaccess
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteCond %{HTTP:X-Forwarded-Proto} !https
RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]
EOF

sed -i 's/AllowOverride None/AllowOverride All/g' /etc/apache2/apache2.conf

wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 -O cloud_sql_proxy
chmod +x cloud_sql_proxy
mkdir /cloudsql; sudo chmod 777 /cloudsql
./cloud_sql_proxy -dir=/cloudsql &

#rm /var/www/html/wwwroot/xxxxx/xxxxxx.php      #temporary for testing.

cat <<'EOF' > /var/www/html/wwwroot/includes/xxxxx.xxxx
<?php

error_reporting(E_ALL);
ini_set('display_errors', 1);

$username = "xxxxxx";
$password = "xxxxx";
$host = "/cloudsql/snappy-gantry-xxxxx:europe-west1:db1";
$dbname = "xxxxx";

setlocale(LC_ALL, 'nld_nld');

$options = array(PDO::MYSQL_ATTR_INIT_COMMAND => 'SET NAMES utf8');
try {
    $db = new PDO("mysql:unix_socket={$host};dbname={$dbname};charset=utf8", $username, $password, $options);
    $db->setAttribute(PDO::ATTR_EMULATE_PREPARES, false);
    $db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
    $db->setAttribute(PDO::ATTR_DEFAULT_FETCH_MODE, PDO::FETCH_ASSOC);
} catch (PDOException $ex) {
    die("Failed to connect to the database: " . $ex->getMessage());
}
if (session_status() == PHP_SESSION_NONE) {
    session_start();
}
EOF

a2dissite 000-default
a2ensite 010-xxxxx_main
a2ensite 020-xxxx_help
a2enmod rewrite
service apache2 restart
apt-get update && apt-get upgrade -y
sudo cat <<EOF > /var/www/check.txt
aanwezig!
EOF
fi

What is the "Microsoft SQL Server 2008 R2 RsFx Driver" and can I run SQL Server without it?

Posted: 15 Jan 2022 02:32 AM PST

Our German hosting company complains about licensing issues with our SQL Server Express installation and demands that we uninstall (or re-license) the following components:

  • Microsoft SQL Server 2008 R2 RsFx Driver
  • Microsoft SQL Server Browser
  • Microsoft SQL Server VSS Writer

From my limited knowledge of SQL Server, I'm pretty sure I can uninstall the Browser and the VSS Writer without affecting the functionality of the SQL Server itself.

But what about the RsFx Driver? It sounds much more like a core component, and I'd be reluctant to uninstall it without knowing what its function is. Can someone shed some light on this, please? And if it's safe, how do I uninstall that driver? I cannot find it in Programs and Features.

Here are details about the version we have installed:

SELECT @@VERSION  

returns:

Microsoft SQL Server 2008 R2 (SP1) - 10.50.2500.0 (X64)
Jun 17 2011 00:54:03
Copyright (c) Microsoft Corporation
Express Edition (64-bit) on Windows NT 6.1 <X64> (Build 7601: Service Pack 1)

How to get a user's contacts on Exchange 2013 when I have full permission on the user's mailbox

Posted: 15 Jan 2022 03:05 AM PST

I want to export a user's contacts to a .csv file without knowing his password. I can give an admin account full permission on the user's mailbox with this cmdlet:

Add-MailboxPermission -Identity abc@example.com -User admin -AccessRights FullAccess  

My question is: with this admin account, how can I access and export the contacts of abc@example.com?

Or if you have any idea to do this, could you please hint me?

Thx,

FreePBX on AWS EC2

Posted: 15 Jan 2022 06:02 AM PST

I know I can install Asterisk and FreePBX on an EC2 instance from the repository, but for various reasons I was hoping to install the FreePBX distro from an ISO on my local machine onto the EC2 instance. Can anyone help with a way around this?

I want to install the latest version of FreePBX on EC2 from an ISO on my computer; it'd be a tremendous help.

thanks

JBoss webservice behind Reverse Proxy, https to http

Posted: 15 Jan 2022 08:04 AM PST

I have a JAX-WS web service deployed on JBoss 7.1.1. The web service is accessed through a reverse proxy. From the public internet the service has to be reached over the https protocol, but the communication between the reverse proxy and JBoss is plain http. So the host in the WSDL file is <soap:address location="http://example.com/WS"/>, but it has to be <soap:address location="https://example.com/WS"/>.

The JBoss configuration is as follows:

modify-wsdl-address = true
wsdl-host = jbossws.undefined.host

Here is the reference for the webservices configuration: https://docs.jboss.org/author/display/AS71/Web+services+configuration

But I can't find where to force the protocol in the soap:address to be https.

Allow only root and domain group to login on Linux server

Posted: 15 Jan 2022 07:02 AM PST

I have successfully installed PBIS Open to authenticate against Active Directory. I also used the /opt/pbis/bin/config RequireMembershipOf command to allow only a certain domain group to log in.

I would now like to allow root and the group(s) specified with the /opt/pbis/bin/config RequireMembershipOf command to log in, and deny all other local users. Is there any way to do this?
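One distribution-independent way is pam_access, which sshd and login consult once it is enabled in the PAM stack. A sketch of /etc/security/access.conf; the group name "domain\linux-admins" is a placeholder for whatever group you passed to RequireMembershipOf:

```
# /etc/security/access.conf -- rules are evaluated top to bottom, first match wins
# allow root from anywhere
+ : root : ALL
# allow the AD group (pam_access wraps group names in parentheses)
+ : (domain\linux-admins) : ALL
# deny everyone else
- : ALL : ALL
```

Enable it by adding `account required pam_access.so` to the relevant PAM service (e.g. /etc/pam.d/sshd), and test from a second session before logging out, so a mistake doesn't lock you out.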

Apache OpenSSL: error in SSLv2/v3 read client hello A

Posted: 15 Jan 2022 05:01 AM PST

Some Background:

I am trying to set up a reverse proxy for my internal business users for site validation when the external route is down. I was able to set up multiple routes with corresponding VirtualHost entries in httpd.conf for port 80 (anonymous users). I'm afraid I'm stuck on the SSL route and unable to make progress. I have been to multiple forums but couldn't find a response that helps me move further.

Server Details:

Apache version: Apache/2.2.29 (Unix)
Linux version:
$ cat /etc/*-release
Enterprise Linux Enterprise Linux Server release 5.8 (Carthage)
Oracle Linux Server release 5.8
Red Hat Enterprise Linux Server release 5.8 (Tikanga)

Problem:

When I try to access the site over SSL (*:443), I get an empty response in all 3 browsers (IE/Chrome/Firefox).

Note: I generated a self-signed certificate following the instructions at:

www.sslshopper.com/article-how-to-create-and-install-an-apache-self-signed-certificate.html

Troubleshooting So far:

=========== Error Log

[Wed Jul 08 23:16:06 2015] [notice] Digest: generating secret for digest authentication ...
[Wed Jul 08 23:16:06 2015] [notice] Digest: done
[Wed Jul 08 23:16:06 2015] [debug] util_ldap.c(1990): LDAP merging Shared Cache conf: shm=0x21b6ff0 rmm=0x21b7048 for VHOST: stgwww.cos.agilent.com
[Wed Jul 08 23:16:06 2015] [debug] util_ldap.c(1990): LDAP merging Shared Cache conf: shm=0x21b6ff0 rmm=0x21b7048 for VHOST: stgwww.cos.agilent.com
[Wed Jul 08 23:16:06 2015] [info] APR LDAP: Built with OpenLDAP LDAP SDK
[Wed Jul 08 23:16:06 2015] [info] LDAP: SSL support available
[Wed Jul 08 23:16:06 2015] [info] mod_unique_id: using ip addr 127.0.0.1
[Wed Jul 08 23:16:07 2015] [info] Init: Seeding PRNG with 144 bytes of entropy
[Wed Jul 08 23:16:07 2015] [info] Loading certificate & private key of SSL-aware server
[Wed Jul 08 23:16:07 2015] [debug] ssl_engine_pphrase.c(470): unencrypted RSA private key - pass phrase not required
[Wed Jul 08 23:16:07 2015] [info] Init: Generating temporary RSA private keys (512/1024 bits)
[Wed Jul 08 23:16:07 2015] [info] Init: Generating temporary DH parameters (512/1024 bits)
[Wed Jul 08 23:16:07 2015] [debug] ssl_scache_shmcb.c(253): shmcb_init allocated 512000 bytes of shared memory
[Wed Jul 08 23:16:07 2015] [debug] ssl_scache_shmcb.c(272): for 511920 bytes (512000 including header), recommending 32 subcaches, 133 indexes each
[Wed Jul 08 23:16:07 2015] [debug] ssl_scache_shmcb.c(306): shmcb_init_memory choices follow
[Wed Jul 08 23:16:07 2015] [debug] ssl_scache_shmcb.c(308): subcache_num = 32
[Wed Jul 08 23:16:07 2015] [debug] ssl_scache_shmcb.c(310): subcache_size = 15992
[Wed Jul 08 23:16:07 2015] [debug] ssl_scache_shmcb.c(312): subcache_data_offset = 3208
[Wed Jul 08 23:16:07 2015] [debug] ssl_scache_shmcb.c(314): subcache_data_size = 12784
[Wed Jul 08 23:16:07 2015] [debug] ssl_scache_shmcb.c(316): index_num = 133
[Wed Jul 08 23:16:07 2015] [info] Shared memory session cache initialised
[Wed Jul 08 23:16:07 2015] [info] Init: Initializing (virtual) servers for SSL
[Wed Jul 08 23:16:07 2015] [info] Configuring server for SSL protocol
[Wed Jul 08 23:16:07 2015] [debug] ssl_engine_init.c(521): Creating new SSL context (protocols: SSLv3, TLSv1)
[Wed Jul 08 23:16:07 2015] [debug] ssl_engine_init.c(759): Configuring permitted SSL ciphers [HIGH:MEDIUM:!aNULL:!MD5]
[Wed Jul 08 23:16:07 2015] [debug] ssl_engine_init.c(843): Configuring server certificate chain (1 CA certificate)
[Wed Jul 08 23:16:07 2015] [debug] ssl_engine_init.c(890): Configuring RSA server certificate
[Wed Jul 08 23:16:07 2015] [debug] ssl_engine_init.c(936): Configuring RSA server private key
[Wed Jul 08 23:16:07 2015] [debug] ssl_engine_init.c(521): Creating new SSL context (protocols: SSLv2, SSLv3, TLSv1)
[Wed Jul 08 23:16:07 2015] [info] mod_ssl/2.2.29 compiled against Server: Apache/2.2.29, Library: OpenSSL/0.9.8e-fips-rhel5
[Wed Jul 08 23:16:07 2015] [debug] proxy_util.c(1829): proxy: grabbed scoreboard slot 11 in child 6098 for worker proxy:reverse
[Wed Jul 08 23:16:07 2015] [debug] proxy_util.c(1945): proxy: initialized single connection worker 11 in child 6098 for (*)
---------
truncated for ease of reading
---------
[Wed Jul 08 23:19:02 2015] [info] [client 192.168.244.1] Connection to child 0 established (server stgwww.cos.agilent.com:443)
[Wed Jul 08 23:19:02 2015] [info] Seeding PRNG with 144 bytes of entropy
[Wed Jul 08 23:19:02 2015] [debug] ssl_engine_kernel.c(1903): OpenSSL: Handshake: start
[Wed Jul 08 23:19:02 2015] [debug] ssl_engine_kernel.c(1911): OpenSSL: Loop: before/accept initialization
[Wed Jul 08 23:19:02 2015] [debug] ssl_engine_io.c(1939): OpenSSL: read 11/11 bytes from BIO#22341b0 [mem: 223b880] (BIO dump follows)
[Wed Jul 08 23:19:02 2015] [debug] ssl_engine_io.c(1872): +-------------------------------------------------------------------------+
[Wed Jul 08 23:19:02 2015] [debug] ssl_engine_io.c(1911): | 0000: 43 4f 4e 4e 45 43 54 20-73 74 67                  CONNECT stg      |
[Wed Jul 08 23:19:02 2015] [debug] ssl_engine_io.c(1917): +-------------------------------------------------------------------------+
**[Wed Jul 08 23:19:02 2015] [debug] ssl_engine_kernel.c(1940): OpenSSL: Exit: error in SSLv2/v3 read client hello A
[Wed Jul 08 23:19:02 2015] [info] [client 192.168.244.1] SSL library error 1 in handshake (server stgwww.cos.agilent.com:443)
[Wed Jul 08 23:19:02 2015] [info] SSL Library Error: 336027803 error:1407609B:SSL routines:SSL23_GET_CLIENT_HELLO:https proxy request speaking HTTP to HTTPS port!?
[Wed Jul 08 23:19:02 2015] [info] [client 192.168.244.1] Connection closed to child 0 with abortive shutdown (server stgwww.cos.agilent.com:443)**
[Wed Jul 08 23:19:02 2015] [info] [client 192.168.244.1] Connection to child 1 established (server stgwww.cos.agilent.com:443)
[Wed Jul 08 23:19:02 2015] [info] Seeding PRNG with 144 bytes of entropy
[Wed Jul 08 23:19:02 2015] [debug] ssl_engine_kernel.c(1903): OpenSSL: Handshake: start
[Wed Jul 08 23:19:02 2015] [debug] ssl_engine_kernel.c(1911): OpenSSL: Loop: before/accept initialization
[Wed Jul 08 23:19:02 2015] [debug] ssl_engine_io.c(1939): OpenSSL: read 11/11 bytes from BIO#22341b0 [mem: 223b880] (BIO dump follows)
[Wed Jul 08 23:19:02 2015] [debug] ssl_engine_io.c(1872): +-------------------------------------------------------------------------+
[Wed Jul 08 23:19:02 2015] [debug] ssl_engine_io.c(1911): | 0000: 43 4f 4e 4e 45 43 54 20-73 74 67                 CONNECT stg      |
[Wed Jul 08 23:19:02 2015] [debug] ssl_engine_io.c(1917): +-------------------------------------------------------------------------+
[Wed Jul 08 23:19:02 2015] [debug] ssl_engine_kernel.c(1940): OpenSSL: Exit: error in SSLv2/v3 read client hello A
[Wed Jul 08 23:19:02 2015] [info] [client 192.168.244.1] SSL library error 1 in handshake (server stgwww.cos.agilent.com:443)
[Wed Jul 08 23:19:02 2015] [info] SSL Library Error: 336027803 error:1407609B:SSL routines:SSL23_GET_CLIENT_HELLO:https proxy request speaking HTTP to HTTPS port!?
[Wed Jul 08 23:19:02 2015] [info] [client 192.168.244.1] Connection closed to child 1 with abortive shutdown (server stgwww.cos.agilent.com:443)
[Wed Jul 08 23:19:02 2015] [info] [client 192.168.244.1] Connection to child 4 established (server stgwww.cos.agilent.com:443)
[Wed Jul 08 23:19:02 2015] [info] Seeding PRNG with 144 bytes of entropy
[Wed Jul 08 23:19:02 2015] [debug] ssl_engine_kernel.c(1903): OpenSSL: Handshake: start
[Wed Jul 08 23:19:02 2015] [debug] ssl_engine_kernel.c(1911): OpenSSL: Loop: before/accept initialization
[Wed Jul 08 23:19:02 2015] [debug] ssl_engine_io.c(1939): OpenSSL: read 11/11 bytes from BIO#22341b0 [mem: 223b880] (BIO dump follows)

=========== Open SSL Check

[sandeep@atgweb logs]$ openssl s_client -connect 192.168.244.129:443 -state -nbio
CONNECTED(00000003)
turning on non blocking io
SSL_connect:before/connect initialization
SSL_connect:SSLv2/v3 write client hello A
**SSL_connect:error in SSLv2/v3 read server hello A
write R BLOCK**
SSL_connect:SSLv3 read server hello A
depth=0 /C=US/ST=California/L=Cupertino/O=Agilent/OU=IT/CN=stgwww.cos.agilent.com/emailAddress=sandeep_rohilla@agilent.com
**verify error:num=18:self signed certificate**
verify return:1
depth=0 /C=US/ST=California/L=Cupertino/O=Agilent/OU=IT/CN=stgwww.cos.agilent.com/emailAddress=sandeep_rohilla@agilent.com
verify return:1
SSL_connect:SSLv3 read server certificate A
SSL_connect:SSLv3 read server key exchange A
SSL_connect:SSLv3 read server done A
SSL_connect:SSLv3 write client key exchange A
SSL_connect:SSLv3 write change cipher spec A
SSL_connect:SSLv3 write finished A
SSL_connect:SSLv3 flush data
SSL_connect:error in SSLv3 read finished A
SSL_connect:error in SSLv3 read finished A
read R BLOCK
SSL_connect:SSLv3 read finished A
read R BLOCK
---
Certificate chain
 0 s:/C=US/ST=California/L=Cupertino/O=Agilent/OU=IT/CN=stgwww.cos.agilent.com/emailAddress=sandeep_rohilla@agilent.com
   i:/C=US/ST=California/L=Cupertino/O=Agilent/OU=IT/CN=stgwww.cos.agilent.com/emailAddress=sandeep_rohilla@agilent.com
 1 s:/C=US/ST=California/L=Cupertino/O=Agilent/OU=IT/CN=atgweb.localvm.com/emailAddress=sandeep_rohilla@agilent.com
   i:/C=US/ST=California/L=Cupertino/O=Agilent/OU=IT/CN=atgweb.localvm.com/emailAddress=sandeep_rohilla@agilent.com
---
Server certificate
-----BEGIN CERTIFICATE-----
MIICvTCCAiYCCQDmbgmAHQHTpTANBgkqhkiG9w0BAQUFADCBojELMAkGA1UEBhMC
VVMxEzARBgNVBAgTCkNhbGlmb3JuaWExEjAQBgNVBAcTCUN1cGVydGlubzEQMA4G
A1UEChMHQWdpbGVudDELMAkGA1UECxMCSVQxHzAdBgNVBAMTFnN0Z3d3dy5jb3Mu
YWdpbGVudC5jb20xKjAoBgkqhkiG9w0BCQEWG3NhbmRlZXBfcm9oaWxsYUBhZ2ls
ZW50LmNvbTAeFw0xNTA3MDgxNzM2MzZaFw0xNjA3MDcxNzM2MzZaMIGiMQswCQYD
VQQGEwJVUzETMBEGA1UECBMKQ2FsaWZvcm5pYTESMBAGA1UEBxMJQ3VwZXJ0aW5v
MRAwDgYDVQQKEwdBZ2lsZW50MQswCQYDVQQLEwJJVDEfMB0GA1UEAxMWc3Rnd3d3
LmNvcy5hZ2lsZW50LmNvbTEqMCgGCSqGSIb3DQEJARYbc2FuZGVlcF9yb2hpbGxh
QGFnaWxlbnQuY29tMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDET9X5cK3G
5Cgxz6RIo1irZAnqQQg2sMdDZ3nTyGLzOTNp90xhHp1+VC6ud5Hcivv2112+QCsA
MVVJIlkUs+bv7gyiPvviFOSyoi5KAiONkmyr5Vyy1XrVfsrCcF/JhYLltoghDl+Q
6ask51K3OUjVka6UrziAunuzgoR5QHavkQIDAQABMA0GCSqGSIb3DQEBBQUAA4GB
ABJqn06X+nvN8gZo9e+ywZhUlyhJIkrYeSS3tEpnBS4PRGyHe2egZKeu1oOquI4w
Sf1toICVVusCoLnSEw1lScfNEYk4oVdmAZBKGV1dHS8dIM7/UIQuIoRQlBQ6DkJp
uq9NHIZrmM0j1Mrj5gxRx0Yqz8U/pYm3XuEAgy7KTmYz
-----END CERTIFICATE-----
subject=/C=US/ST=California/L=Cupertino/O=Agilent/OU=IT/CN=stgwww.cos.agilent.com/emailAddress=sandeep_rohilla@agilent.com
issuer=/C=US/ST=California/L=Cupertino/O=Agilent/OU=IT/CN=stgwww.cos.agilent.com/emailAddress=sandeep_rohilla@agilent.com
---
No client certificate CA names sent
---
SSL handshake has read 2509 bytes and written 319 bytes
---
New, TLSv1/SSLv3, Cipher is DHE-RSA-AES256-SHA
Server public key is 1024 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
SSL-Session:
    Protocol  : TLSv1
    Cipher    : DHE-RSA-AES256-SHA
    Session-ID: EE96B79CC47110B9A7B242691F1721DE77A3119F001CC88CE3B9BEFB4433D8D1
    Session-ID-ctx:
    Master-Key: 30CB866077089FD7198DBD08EEAD9A98C58E43563A191FA2FA8E7A967963E4A614F53045C8528B0978ABD0285ACC41FE
    Key-Arg   : None
    Krb5 Principal: None
    Start Time: 1436378586
    Timeout   : 300 (sec)
    Verify return code: 18 (self signed certificate)
---
SSL3 alert read:warning:close notify
closed
SSL3 alert write:warning:close notify
[sandeep@atgweb logs]$ cd ..
[sandeep@atgweb apache2]$ cd bin
[sandeep@atgweb bin]$ sudo ./apachectl -version
Server version: Apache/2.2.29 (Unix)
Server built:   May 21 2015 21:05:01

=============== HTTPD-SSL.CONF File

#
#SSLRandomSeed startup file:/dev/random  512
#SSLRandomSeed startup file:/dev/urandom 512
#SSLRandomSeed connect file:/dev/random  512
#SSLRandomSeed connect file:/dev/urandom 512

Listen 443
NameVirtualHost *:443

#   Some MIME-types for downloading Certificates and CRLs
#
AddType application/x-x509-ca-cert .crt
AddType application/x-pkcs7-crl    .crl

SSLPassPhraseDialog  builtin

SSLSessionCache        "shmcb:/usr/local/apache2/logs/ssl_scache(512000)"
SSLSessionCacheTimeout  300
SSLMutex  "file:/usr/local/apache2/logs/ssl_mutex"

##
## SSL Virtual Host Context
##

<VirtualHost _default_:443>

#   General setup for the virtual host
DocumentRoot "/usr/local/apache2/htdocs"
ServerName xxxxx:443
ServerAdmin you@example.com
ErrorLog "/usr/local/apache2/logs/error_log"
TransferLog "/usr/local/apache2/logs/access_log"

#   Enable/Disable SSL for this virtual host.
SSLEngine on

#   SSL Protocol support:
SSLProtocol all -SSLv2

#   SSL Cipher Suite:
SSLCipherSuite HIGH:MEDIUM:!aNULL:!MD5

#   Server Certificate:
SSLCertificateFile "/usr/local/apache2/conf/ssl.crt"

#   Server Private Key:
SSLCertificateKeyFile "/usr/local/apache2/conf/ssl.key"

#   Server Certificate Chain:
SSLCertificateChainFile "/home/sandeep/sandeep.crt"

<FilesMatch "\.(cgi|shtml|phtml|php)$">
    SSLOptions +StdEnvVars
</FilesMatch>
<Directory "/usr/local/apache2/cgi-bin">
    SSLOptions +StdEnvVars
</Directory>

BrowserMatch "MSIE [2-5]" \
         nokeepalive ssl-unclean-shutdown \
         downgrade-1.0 force-response-1.0

#   Per-Server Logging:
CustomLog "/usr/local/apache2/logs/ssl_request_log" \
          "%t %h %{SSL_PROTOCOL}x %{SSL_CIPHER}x \"%r\" %b"

SSLProxyEngine on
SSLProxyVerify none

SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown
CustomLog logs/ssl_request_log \
   "%t %h %{SSL_PROTOCOL}x %{SSL_CIPHER}x \"%r\" %b"

ProxyPass / http://www.google.com
ProxyPassReverse / http://www.google.com
</VirtualHost>

=============== Modules Enabled

LoadModule authn_file_module modules/mod_authn_file.so
LoadModule authn_dbm_module modules/mod_authn_dbm.so
LoadModule authn_anon_module modules/mod_authn_anon.so
LoadModule authn_dbd_module modules/mod_authn_dbd.so
LoadModule authn_default_module modules/mod_authn_default.so
LoadModule authn_alias_module modules/mod_authn_alias.so
LoadModule authz_host_module modules/mod_authz_host.so
LoadModule authz_groupfile_module modules/mod_authz_groupfile.so
LoadModule authz_user_module modules/mod_authz_user.so
LoadModule authz_dbm_module modules/mod_authz_dbm.so
LoadModule authz_owner_module modules/mod_authz_owner.so
LoadModule authnz_ldap_module modules/mod_authnz_ldap.so
LoadModule authz_default_module modules/mod_authz_default.so
LoadModule auth_basic_module modules/mod_auth_basic.so
LoadModule auth_digest_module modules/mod_auth_digest.so
LoadModule file_cache_module modules/mod_file_cache.so
LoadModule cache_module modules/mod_cache.so
LoadModule disk_cache_module modules/mod_disk_cache.so
LoadModule mem_cache_module modules/mod_mem_cache.so
LoadModule dbd_module modules/mod_dbd.so
LoadModule dumpio_module modules/mod_dumpio.so
LoadModule echo_module modules/mod_echo.so
LoadModule reqtimeout_module modules/mod_reqtimeout.so
LoadModule ext_filter_module modules/mod_ext_filter.so
LoadModule include_module modules/mod_include.so
LoadModule filter_module modules/mod_filter.so
LoadModule substitute_module modules/mod_substitute.so
LoadModule charset_lite_module modules/mod_charset_lite.so
LoadModule deflate_module modules/mod_deflate.so
LoadModule ldap_module modules/mod_ldap.so
LoadModule log_config_module modules/mod_log_config.so
LoadModule log_forensic_module modules/mod_log_forensic.so
LoadModule logio_module modules/mod_logio.so
LoadModule env_module modules/mod_env.so
LoadModule mime_magic_module modules/mod_mime_magic.so
LoadModule cern_meta_module modules/mod_cern_meta.so
LoadModule expires_module modules/mod_expires.so
LoadModule headers_module modules/mod_headers.so
LoadModule ident_module modules/mod_ident.so
LoadModule usertrack_module modules/mod_usertrack.so
LoadModule unique_id_module modules/mod_unique_id.so
LoadModule setenvif_module modules/mod_setenvif.so
LoadModule version_module modules/mod_version.so
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_connect_module modules/mod_proxy_connect.so
LoadModule proxy_ftp_module modules/mod_proxy_ftp.so
LoadModule proxy_http_module modules/mod_proxy_http.so
LoadModule proxy_scgi_module modules/mod_proxy_scgi.so
LoadModule proxy_ajp_module modules/mod_proxy_ajp.so
LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
LoadModule ssl_module modules/mod_ssl.so
LoadModule mime_module modules/mod_mime.so
LoadModule dav_module modules/mod_dav.so
LoadModule status_module modules/mod_status.so
LoadModule autoindex_module modules/mod_autoindex.so
LoadModule asis_module modules/mod_asis.so
LoadModule info_module modules/mod_info.so
LoadModule cgi_module modules/mod_cgi.so
LoadModule dav_fs_module modules/mod_dav_fs.so
LoadModule dav_lock_module modules/mod_dav_lock.so
LoadModule vhost_alias_module modules/mod_vhost_alias.so
LoadModule negotiation_module modules/mod_negotiation.so
LoadModule dir_module modules/mod_dir.so
LoadModule imagemap_module modules/mod_imagemap.so
LoadModule actions_module modules/mod_actions.so
LoadModule speling_module modules/mod_speling.so
LoadModule userdir_module modules/mod_userdir.so
LoadModule alias_module modules/mod_alias.so
LoadModule rewrite_module modules/mod_rewrite.so
#

I would really appreciate help on this; I have been banging my head against the wall for days. I am new to this, so my apologies if I have missed something basic.

Cheers, Sandeep

Tunnel port 8080 over a jumpserver using SSH - SOCKS5 proxy?

Posted: 15 Jan 2022 08:04 AM PST

I have this setup:

LocalPC - Jumpserver - Webserver, with a page only accessible on the Webserver itself via

localhost:8080  

LocalPC and the Webserver are not connected - the Jumpserver has to be used. The Jumpserver doesn't have access to the webpage on the Webserver.

I want to use Firefox to view this webpage on LocalPC.

I know how to make a SOCKS proxy to the Jumpserver - normally this is enough, but not in this case:

ssh -TD 8080 me@jumpserver  

and

I know how to tunnel one specific port over the Jumpserver:

ssh -f -N -q -L 2222:me@target:22 me@jumpserver  

But the first method only makes a tunnel to the Jumpserver, and the second method with ports 8081:me@webserver:8080 doesn't give an error but results in a 404 for

http://localhost:8081   

in Firefox...

So how will I see the website on LocalPC?

And for security reasons: I need both connections encrypted, and no other users on the Jumpserver should be able to use the tunnel.
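The missing piece is that the forward has to terminate on the Webserver itself, reached through the Jumpserver; with OpenSSH 7.3+ that is a single command using -J (ProxyJump). Hostnames here are placeholders:

```shell
# Forward local port 8081, through the jumpserver, straight to port 8080
# on the webserver's own loopback. -G only parses and prints the
# resulting client configuration without connecting, so the jump and the
# forward can be checked -- drop -G (and add -N) to actually open the tunnel:
ssh -G -J me@jumpserver -L 127.0.0.1:8081:localhost:8080 me@webserver \
  | grep -E '^(proxyjump|localforward)'
```

With the tunnel open, point Firefox at http://localhost:8081 directly (no SOCKS proxy needed). This also covers the security requirements: with -J, the SSH session to the Webserver is encrypted end-to-end through the Jumpserver, no listening port is opened on the Jumpserver for others to use, and binding the local side to 127.0.0.1 keeps the listener off your LAN.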

(Sorry for the code blocks - I am not allowed to write the word localhost...)

Sametime Issue - User not available in group even when online

Posted: 15 Jan 2022 04:02 AM PST

We have two Sametime servers (replicas), located in London and Singapore.

We also have a group (PL_ALL) to which both Singapore and London users are added.

When a user logs in with the London server as host, Singapore users are not shown in the group PL_ALL. At the same time there is another group, PL_SG_ALL, which works correctly. (E.g., user XX is in both groups but only visible in one.)

Any idea what could be the problem with PL_ALL group alone?


Sametime Server Version: 8.5.2 & Sametime Client Version: 8.5.2

  • Main Group - PL_ALL
  • Sub Groups - PL_LN_ALL and PL_SG_ALL
  • Subgroup under PL_LN_ALL - PL_LN_Finance and more
  • Subgroup under PL_SG_ALL - PL_SG_Finance and more
  • User 1 - is from Singapore and is added to the group PL_SG_Finance. He always connects to the Sametime client using the Singapore Sametime server as host
  • User 2 - is from London. He always connects to the Sametime client using the London Sametime server as host

Scenario 1: User 2 logs in to Sametime and adds the PL_ALL and PL_SG_Finance groups to his list. The problem is that he can see User 1 listed under PL_SG_Finance but not under PL_ALL.

Scenario 2: User 2 logs in to Sametime using the Singapore Sametime server as host. He can see User 1 in PL_ALL and also in PL_SG_Finance.

I hope this is clear

Any tricks for making sshfs authenticate only on write?

Posted: 15 Jan 2022 03:05 AM PST

There is seemingly a trick for creating read-only sshfs logins where the read-only attribute is enforced by the remote's ~/.ssh/authorized_keys file.

You first create a program ~/.ssh/ro-sftp-server that runs sftp-server -R, with whatever other options were passed. You next set up your restricted SSH key as usual in the remote's ~/.ssh/authorized_keys file, except adding a command restriction:

no-X11-forwarding,no-agent-forwarding,no-pty,command="~/.ssh/ro-sftp-server" ssh-rsa ...

Finally, you mount the directory by invoking sshfs.

sshfs -o ssh_command="ssh -i ~/.ssh/ro_key" \
      -o sftp_server="~/.ssh/ro-sftp-server" \
      -o idmap=mrmeow -o ro \
      mrmeow@example.com:. ~/www/

Great! Now how do I permit writes, but only after asking for a password?

I could certainly make autofs mount a second read-write sshfs volume on demand using a password protected but unrestricted key.

I'd finally need unionfs, mhddfs, or similar to make this second directory appear over the first, except apparently I'd need them both honestly mounted. :(

Any ideas about how one could achieve this "password prompt on write" functionality?

LSASS.exe trying to communicate over port 80

Posted: 15 Jan 2022 06:02 AM PST

We are running a standalone web server (Windows 2008 + IIS 7), and our antivirus is blocking LSASS.exe (C:\Windows\system32\lsass.exe) from making outbound connections over port 80.

Why is LSASS doing this? (Should I be worried?)

Extracting files from CloneZilla images

Posted: 15 Jan 2022 05:10 AM PST

Is there a way to browse CloneZilla images and extract individual files from them without restoring the whole image?
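Not directly: Clonezilla saves each filesystem as a compressed partclone stream split into pieces, so to get at individual files you reassemble the stream, restore it to a raw image, and loop-mount that. The sketch below demonstrates the reassembly step with synthetic data (it is plain `cat` plus decompress, which works the same on Clonezilla's real split files); the partclone and mount steps are shown as comments and assume an ext4 partition saved as sda1 with gzip compression - adjust names to your image:

```shell
# Demo of the reassembly step with synthetic split gzip pieces, standing
# in for files like sda1.ext4-ptcl-img.gz.aa, .ab, ... in the image dir:
workdir=$(mktemp -d)
printf 'pretend this is a partclone stream' > "$workdir/stream"
gzip -c "$workdir/stream" | split -b 16 - "$workdir/img.gz."

# The glob expands in alphabetical order, which is the correct piece order:
cat "$workdir"/img.gz.* | gzip -d -c > "$workdir/reassembled"
cmp "$workdir/stream" "$workdir/reassembled" && echo "reassembly OK"

# On a real image you would then convert the partclone stream back into a
# raw filesystem image (-C skips the size check, needed when the target is
# a plain file rather than a partition) and loop-mount it read-only:
#   cat sda1.ext4-ptcl-img.gz.* | gzip -d -c > sda1.partclone
#   partclone.restore -C -s sda1.partclone -o sda1.img
#   sudo mount -o loop,ro sda1.img /mnt
```

The loop-mounted image is then an ordinary filesystem, so individual files can be copied out without restoring the whole image to a disk.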
