Thursday, December 30, 2021

Recent Questions - Server Fault

fail2ban not blocking SSH connection

Posted: 30 Dec 2021 01:49 AM PST

I am using Fail2Ban v0.11.2 on Manjaro 21.2.0. First I copied jail.conf like this:

cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local  

After that I changed some lines inside jail.local:

[...]

# "bantime" is the number of seconds that a host is banned.
bantime  = 1m

# A host is banned if it has generated "maxretry" during the last "findtime"
# seconds.
findtime  = 1m

# "maxretry" is the number of failures before a host get banned.
maxretry = 3

[...]

[sshd]

# To use more aggressive sshd modes set filter parameter "mode" in jail.local:
# normal (default), ddos, extra or aggressive (combines all).
# See "tests/files/logs/sshd" or "filter.d/sshd.conf" for usage example and details.
#mode   = normal
enabled = true
port    = 1234,ssh
logpath = /var/log/fail2ban.log
maxretry = 3
backend = %(sshd_backend)s
filter = sshd

[...]

(And yes, my ssh port is something like 1234, so a custom one, and not 22). You can see my whole jail.local here.
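As a side note on the settings above: bantime/findtime/maxretry mean a host is banned only when maxretry failures fall inside a sliding findtime window. A minimal sketch of that counting logic (simplified; real Fail2Ban also parses logs and persists bans):

```python
from collections import defaultdict, deque

MAXRETRY = 3   # failures before a ban
FINDTIME = 60  # sliding window in seconds

def should_ban(failures, host, now, window=FINDTIME, maxretry=MAXRETRY):
    """Record a failure for `host` at time `now` and decide whether to ban.

    `failures` maps host -> deque of failure timestamps. A host is banned
    when `maxretry` failures occur within the last `window` seconds.
    """
    q = failures[host]
    q.append(now)
    # Drop failures that fell out of the findtime window.
    while q and now - q[0] > window:
        q.popleft()
    return len(q) >= maxretry

failures = defaultdict(deque)
# Three quick failures inside one minute -> ban on the third attempt.
print(should_ban(failures, "10.0.0.5", 0))    # False
print(should_ban(failures, "10.0.0.5", 10))   # False
print(should_ban(failures, "10.0.0.5", 20))   # True
```

With the same three failures spread over more than a minute, the first attempt falls out of the window and no ban occurs.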

After that I enabled and started the service:

sudo systemctl enable fail2ban.service
sudo systemctl start fail2ban.service

sudo systemctl status fail2ban.service shows me "Active: active (running) since [...]", so it's running.

I can check the log via cat /var/log/fail2ban.log.

What I expect: take a different device, with a different IP address, as a client of my "fail2ban server" and try an SSH connection to it. I am using a wrong private key for the SSH connection: the username is correct, the password is correct, but the private key is wrong. Fail2Ban should block this device after 3 attempts within 1 minute. Nevertheless, I can try multiple logins and my device doesn't get blocked. cat /var/log/fail2ban.log also doesn't show my login attempts:

2021-12-30 10:37:44,170 fail2ban.server         [592043]: INFO    Starting Fail2ban v0.11.2
2021-12-30 10:37:44,172 fail2ban.observer       [592043]: INFO    Observer start...
2021-12-30 10:37:44,182 fail2ban.database       [592043]: INFO    Connected to fail2ban persistent database '/var/lib/fail2ban/fail2ban.sqlite3'
2021-12-30 10:37:44,184 fail2ban.jail           [592043]: INFO    Creating new jail 'sshd'
2021-12-30 10:37:44,204 fail2ban.jail           [592043]: INFO    Jail 'sshd' uses systemd {}
2021-12-30 10:37:44,207 fail2ban.jail           [592043]: INFO    Initiated 'systemd' backend
2021-12-30 10:37:44,209 fail2ban.filter         [592043]: INFO      maxLines: 1
2021-12-30 10:37:44,254 fail2ban.filtersystemd  [592043]: INFO    [sshd] Added journal match for: '_SYSTEMD_UNIT=sshd.service + _COMM=sshd'
2021-12-30 10:37:44,254 fail2ban.filter         [592043]: INFO      maxRetry: 3
2021-12-30 10:37:44,255 fail2ban.filter         [592043]: INFO      findtime: 60
2021-12-30 10:37:44,255 fail2ban.actions        [592043]: INFO      banTime: 60
2021-12-30 10:37:44,255 fail2ban.filter         [592043]: INFO      encoding: UTF-8
2021-12-30 10:37:44,257 fail2ban.jail           [592043]: INFO    Jail 'sshd' started
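Two things stand out in this log: the jail uses the systemd backend (so logpath is effectively ignored and matches come from the journal), and failed publickey-only logins may not produce the log lines the default filter mode matches. Some hedged checks (the commands are standard Fail2Ban 0.11 tooling; the mode = aggressive suggestion is a guess worth testing, not a confirmed fix):

```
# Is the jail actually seeing failures and bans?
sudo fail2ban-client status sshd

# Do the sshd journal entries match the sshd filter at all?
sudo fail2ban-regex systemd-journal /etc/fail2ban/filter.d/sshd.conf

# If key-only failures are not matched, try in jail.local under [sshd]:
#   mode = aggressive
```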

Dynamic Google hosted DNS zone update by DHCP

Posted: 30 Dec 2021 01:49 AM PST

I have a DNS zone running in Google Cloud. I would like to integrate my ISC DHCP server with that zone by enabling automatic host registration into the zone.

I'm looking for some Google Cloud analogue of enabling this BIND feature:

key DHCP_UPDATER {
    algorithm HMAC-MD5.SIG-ALG.REG.INT;
    secret pRP5FapFoJ95JEL06sv4PQ==;
};

zone "example.org" {
    type master;
    file "example.org.db";
    allow-update { key DHCP_UPDATER; };
};

and then configuring the DHCP server to update the zone on lease changes.

Any idea how to do that?
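Google Cloud DNS does not accept RFC 2136 dynamic updates (the TSIG-key mechanism the BIND snippet above relies on), so the usual workaround is to call the Cloud DNS API from ISC DHCP's lease events instead. A hypothetical sketch (the script path is an assumption, and dhcpd must be built with execute() support):

```
# /etc/dhcp/dhcpd.conf -- event hooks calling an external updater script
on commit {
    set client_ip = binary-to-ascii(10, 8, ".", leased-address);
    set client_name = pick-first-value(option host-name, "unknown");
    execute("/usr/local/sbin/gcloud-ddns-update.sh", "add", client_name, client_ip);
}
on release {
    set client_ip = binary-to-ascii(10, 8, ".", leased-address);
    execute("/usr/local/sbin/gcloud-ddns-update.sh", "del", client_ip);
}
```

The hypothetical script would wrap gcloud dns record-sets transaction start/add/remove/execute calls (or the Cloud DNS REST API) to upsert an A record for the lease.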

Is my router's NAT and Windows Firewall address or port restricted?

Posted: 30 Dec 2021 01:11 AM PST

I recently learnt about the different types of NAT as I was trying to learn UDP hole punching using Python sockets. The Python program was successful, but I had trouble finding the following information:

  • How do I find out whether my home router (TP-Link Archer VR400) is implementing an address or a port restricted cone NAT? Which is the most likely?
  • Does Windows Firewall (Windows 10) implement something more akin to an address or port restricted cone NAT? In other words, following a recently sent outbound UDP packet to (ip, port), will it accept inbound UDP packets from (ip, other_port) (address restricted) or will it only accept packets from (ip, port) (port restricted)?
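The second question can be answered empirically: send a UDP datagram out, then have the remote side reply from both the original port and a different port on the same IP, and see which replies arrive. A local, NAT-free sketch of the methodology (on loopback both replies get through; behind a port-restricted NAT or firewall only the reply from the original (ip, port) would):

```python
import socket

# "Remote" endpoints: the port we talk to, and a second port on the same host.
srv_a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
srv_a.bind(("127.0.0.1", 0))
srv_b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
srv_b.bind(("127.0.0.1", 0))

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.bind(("127.0.0.1", 0))
client.settimeout(1.0)

# Outbound packet to (ip, port) -- this is what opens the NAT/firewall pinhole.
client.sendto(b"ping", srv_a.getsockname())
srv_a.recvfrom(1024)

# Replies from the same port and from a *different* port on the same IP.
srv_a.sendto(b"same-port", client.getsockname())
srv_b.sendto(b"other-port", client.getsockname())

got = set()
try:
    for _ in range(2):
        data, _ = client.recvfrom(1024)
        got.add(data)
except socket.timeout:
    pass

# On loopback (no NAT) both arrive; a port-restricted NAT/firewall would
# drop b"other-port", an address-restricted one would let it through.
print(got)
```

Run the client behind the NAT or firewall under test, with the two "server" sockets on an outside host, and the set of payloads received tells you which restriction applies.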

Server 2022 SMTP Server issue

Posted: 30 Dec 2021 12:22 AM PST

This is a new installation of Server 2022 Standard 21H2. I'm trying to configure the SMTP Server so that a client application can send emails internally.

The first thing I noticed is that when I open IIS 6.0 Manager and right-click the SMTP virtual server, it usually generates the following error:

SMTP Server Error

If I try often enough, I can get in and configure the settings. The next thing, though, is that whenever I attempt to send a message through the SMTP server, the SMTP service stops and the following event is logged:

Event Log

Can anyone suggest where I should start troubleshooting this, please?

What is a "context" in the client-server model?

Posted: 30 Dec 2021 01:51 AM PST

I am working on a C++ program running on a Linux machine. I am new to client-server architecture. I recently learned that the program I'm working on uses something called a context, so that the client can set various configuration options (like access mode) to communicate with the server accordingly.

I want to know whether this is specific to my program or a general concept. Is "context" the general term, or does it have other common names? Any pointer in the right direction would be helpful.

Update: I realize this may be too broad to answer. I am particularly curious about where this concept fits in client-server architecture.

Why is the website on port 80 able to access the API on port 81 on the same server, but not from outside?

Posted: 30 Dec 2021 12:07 AM PST

We have deployed an Angular website and a web API on the same server, on ports 80 and 81 respectively, both of which are allowed.

If we access the site from within the server, it reaches the API on port 81.

But when we access the website from outside, the site works but the API cannot be reached, even though ports 80 and 81 are allowed for those PCs at that IP address.

It's a weird issue.

Capping the bandwidth on my phone without having to root it [closed]

Posted: 29 Dec 2021 10:51 PM PST

I don't want to root my phone, although I've heard that is, in fact, the only way to natively cap my bandwidth usage. I have NetLimiter on my Windows PC, and I was thinking I could route the phone's traffic through my PC by setting the PC as the default gateway on my phone, then limit the throughput to the external internet connection there. That would keep the phone from crashing the internet for all of my neighbors, since we have a shared connection provided by our landlord, and we live out in the mountains where it's rather remote. Does anyone know how to do this without turning my computer into a security nightmare?

How to use Mod Rewrite to access non-document-root folder files?

Posted: 29 Dec 2021 11:33 PM PST

I have the following structure for my website codes at /var/www/html

-- files
---- test.txt

-- app
---- SomePhpCodes.php
---- SomePhpCodes.php

-- public
---- index.php
---- .htaccess

The document root is set to "/var/www/html/public", but I also want files to be accessible via the path "http://mywebsite/files/test.txt".

I think this must be possible using mod_rewrite in the .htaccess file inside public but I am struggling to do so.

I tried to add a rule to the default .htaccess that is provided by the Laravel framework. The current .htaccess looks like the following:

<IfModule mod_rewrite.c>
    <IfModule mod_negotiation.c>
        Options -MultiViews -Indexes
    </IfModule>

    RewriteEngine On

    # Handle Authorization Header
    RewriteCond %{HTTP:Authorization} .
    RewriteRule .* - [E=HTTP_AUTHORIZATION:%{HTTP:Authorization}]

    # Redirect Trailing Slashes If Not A Folder...
    RewriteCond %{REQUEST_FILENAME} !-d
    RewriteCond %{REQUEST_URI} (.+)/$
    RewriteRule ^ %1 [L,R=301]

    # The following rule is added by me
    RewriteRule files/(.*\.txt) /var/www/html/files/$1 [L]

    # Send Requests To Front Controller...
    RewriteCond %{REQUEST_FILENAME} !-d
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteRule ^ index.php [L]
</IfModule>

The current result is an Internal Server Error when trying to access "http://mywebsite/files/test.txt". I have a feeling I might have missed some settings to make the "files" folder public, but I don't have any idea how to do so.

I am stuck; I would appreciate any help.
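For what it's worth, the Internal Server Error is not surprising: in a per-directory .htaccess context, a RewriteRule target like /var/www/html/files/$1 is treated as a URL path, not a filesystem path, so mod_rewrite cannot map to a directory outside the document root from there. The usual fix is an Alias in the vhost/server configuration instead (a sketch, assuming access to the vhost config):

```
# In the vhost config (not .htaccess):
Alias /files/ /var/www/html/files/
<Directory /var/www/html/files>
    Require all granted
</Directory>
```

With that in place, the extra RewriteRule in .htaccess can be dropped entirely.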

How can I solve the error when opening Google Chrome on the server?

Posted: 30 Dec 2021 12:30 AM PST

I ran the command google-chrome on the server and got this error: https://i.imgur.com/BmxPj2b.png

Then I tried this solution: https://i.imgur.com/4G8O6nN.png and got this error: https://i.imgur.com/cl4tJHj.png

How to configure different relay, masquerade and/or smarthost settings, based on recipient and/or sender address

Posted: 29 Dec 2021 09:27 PM PST

My Google-fu is letting me down with this one.

Is it possible to EITHER/preferably configure sendmail to...

  • masquerade the From address of any mail intended for recipient@aaaa.com to appear as service@aaaa.com and use smtp.aaaa.com to send it (with authentication)
  • AND masquerade any mail intended for recipient@bbbb.com (including those that share To, CC or BCC with above) to appear as service@bbbb.com and use smtp.bbbb.com to send it (with different auth user/pwd)
  • AND use smtp.cccc.com for any other non-local address (again, with different auth)
  • AND store mail for local users, as it currently is in /var/spool/mail/$USER

...OR configure sendmail to...

  • use smtp.aaaa.com (with auth) to send any mail appearing to be From service@aaaa.com
  • AND use smtp.bbbb.com (different auth) to send any mail appearing to be From service@bbbb.com
  • AND use smtp.cccc.com (different auth) for any other non-local address
  • AND store mail for local users, as it currently is in /var/spool/mail/$USER

...?

The sendmail server is not exposed externally, so this isn't a security/open-relay concern -- just trying to support a local app with limited mail config, that we're hoping to send to different parties with differing (mostly vanity) mail requirements.

I'm not a sendmail guy by trade, and am only targeting sendmail because I understand it to be the de facto MTA on AIX 7.1 and AIX 7.2, which are the current and next validated OS versions for the proprietary app.

The server appears to have only an /etc/mail/sendmail.cf file, rather than a sendmail.mc (although there are a lot of /usr/samples/*/sendmail/*/*.m4 files). The server reports Version AIX7.1/8.14.4.

I think I need to enable mailertable (for the relay), genericstable (for the masquerade) and authinfo (for the auth), and use some combination of those to get what I'm after but, without a sendmail.mc, I'm at a loss as to how I do that.

lvmcache/dm-cache writeback cache full performance

Posted: 30 Dec 2021 01:29 AM PST

I have a SSD writeback cache in front of a HDD, set up through lvmcache (so a dm-cache). When the cache LV is not full (Data% column in lvs < 100.00%), writes go to the cache device (monitored via dstat). However, when the cache LV is full (Data% = 100.00%), writes go directly to the HDD, essentially becoming a writethrough cache. Blocks do not get evicted from the SSD cache, even after some time, and performance drops. When I try reading recently read data from the cached LV, reads are from the SSD, so I assume the entire SSD has now become a read cache. Is this expected behavior for dm-cache's write cache, even in writeback mode? Is there no reserved space for writes? This seems like quite a poor design as essentially users can only write one cache LV's worth of data before the cache becomes a writethrough cache.

My understanding is that dm-cache uses the mq eviction algorithm, but that only applies to read caching and thus is irrelevant to the write caching issue I am observing.

Is there a way to reserve space for a write cache, or use both a dm-writecache (which I understand will not do any read caching) and a dm-cache at the same time?

Enabling PHPMYADMIN Logging & Fail2ban Default Filter

Posted: 29 Dec 2021 08:26 PM PST

I am on Debian 10.5 LAMP with ISPConfig, running PHPMYADMIN 4.9.0.1.

I installed phpMyAdmin following this tutorial. I can only guess that somehow ISPConfig may be interfering with something.

In any case, I am trying to setup the default phpmyadmin-syslog.conf filter for fail2ban to protect phpmyadmin.

Problem:
phpMyAdmin logging doesn't appear to work as described in the documentation.

I have tried 3 methods to enable logging:

In my /usr/share/phpmyadmin/config.inc.php I have added:

$cfg['AuthLog'] = 'auto';

which should output failed login attempts to syslog or PHP, according to the docs:
https://docs.phpmyadmin.net/en/latest/config.html

I then tried the current setting:

$cfg['AuthLog'] = 'syslog';

However, neither /var/log/auth.log , nor /var/log/syslog logged failed login attempts.

I also tried:
$cfg['AuthLog'] = '/var/log/phpmyadmin-auth.log';

and gave the www-data user permissions on the log using (note: unsure if this is correct; pma is the control user):

chown www-data:www-data /var/log/phpmyadmin-auth.log
chmod 755 /var/log/phpmyadmin-auth.log

My /etc/fail2ban/jail.local file contains:

[phpmyadmin-auth]
enabled = true
port = https,https
filter = phpmyadmin-syslog
logpath = /var/log/syslog
maxretry = 3

and the default /etc/fail2ban/filter.d/phpmyadmin-syslog.conf contains:

# Fail2Ban filter for the phpMyAdmin-syslog
#
[INCLUDES]
before = common.conf

[Definition]
_daemon = phpMyAdmin
failregex = ^%(__prefix_line)suser denied: (?:\S+|.*?) \(mysql-denied\) from <HOST>\s*$
ignoreregex =
# Author: Pavel Mihadyuk
# Regex fixes: Serg G. Brester

(no useful tip there for enabling the phpMyAdmin logging)

Anybody know what I am missing?
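One way to check the filter side independently of the logging problem is to test the failregex by hand against a sample log line. A rough sketch in Python (the sample line and the expansions of %(__prefix_line)s and <HOST> are simplified assumptions; fail2ban-regex does this properly):

```python
import re

# Simplified stand-ins for fail2ban's template variables:
# __prefix_line ~ "timestamp host daemon[pid]: ", <HOST> ~ an IPv4 address.
prefix = r".*phpMyAdmin\[\d+\]:\s+"
host = r"(?P<host>\d{1,3}(?:\.\d{1,3}){3})"
failregex = re.compile(
    r"^" + prefix + r"user denied: (?:\S+|.*?) \(mysql-denied\) from " + host + r"\s*$"
)

# Hypothetical syslog line of the shape the filter expects.
sample = "Dec 29 20:15:01 web1 phpMyAdmin[1234]: user denied: root (mysql-denied) from 192.0.2.10"
m = failregex.match(sample)
print(m.group("host") if m else "no match")
```

If lines of this shape never appear in /var/log/syslog, the problem is on the phpMyAdmin logging side, not in the fail2ban filter.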

Group Policy Management Tools

Posted: 29 Dec 2021 11:02 PM PST

My company is currently using NetIQ GPAdmin from Microfocus. It has been a nightmare and is apparently going EOL. We are looking at moving to a different utility and they are attempting to sell us on "Universal Group Policy" by Microfocus. As we are looking at a new implementation, we have the opportunity to test other options and perhaps go a different route. Looking for any suggestions and would ask for reasoning/experiences with the product.

My questions are as follows:

  1. If you have used Universal Group Policy, what were your experiences, and how does it compare to anything else you have looked at or used in the past?
  2. If you have not used or are not using Universal Group Policy, what utility are you using, and what are its pros and cons? What has your experience been with rolling back changes and using groups to control change approvals/scheduling?
  3. Do you have any recommendations for a mid-to-large setting: 500+ servers and a multi-domain structure, with a possible foot in managing cloud and private GPOs?

My thanks ahead of time for any answers given here.

Windows Active Directory - Change Time Server Settings after PDC/FSMO moved

Posted: 29 Dec 2021 11:37 PM PST

We have configured a GPO to configure our PDC emulator as described here (and on many other blogs): https://docs.microsoft.com/en-us/archive/blogs/nepapfe/its-simple-time-configuration-in-active-directory

It means that our GPO uses a WMI filter that applies only to the PDC emulator, to set the NTP settings as the primary time source in our AD domain:

Select * from Win32_ComputerSystem where DomainRole = 5

When the FSMO roles are moved to another DC, these time/NTP settings are applied to the new DC that acts as PDC. But after the role is moved, the old PDC is still configured with the old NTP/time settings.

To correct that situation we apply this manual command on the old PDC: w32tm /config /syncfromflags:domhier /update

But we would like to do it automatically. How can we automatically reset the previous settings that remain on the old PDC?

Thanks in advance
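One hedged approach to the question above: a second GPO (or scheduled task) with the inverse WMI filter, Select * from Win32_ComputerSystem where DomainRole = 4, which matches domain controllers that are not the PDC emulator. Applied as a startup script or immediate scheduled task, it could run something like:

```
rem Revert a DC that is no longer the PDC emulator to domain-hierarchy time sync
w32tm /config /syncfromflags:domhier /update
w32tm /resync /nowait
```

This is a sketch, not a tested policy: the commands are standard w32tm usage, but how often the task should re-run after a role transfer is a design choice to validate in your environment.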

ansible block-rescue is not working with handler

Posted: 29 Dec 2021 10:35 PM PST

I am trying the following playbook, but even after the handler fails with an error, the rescue section does not run.

handlers:
  - name: port status
    shell: netstat -nltp | grep {{ app1_port }}
    register: port
    listen: port_status
  - name: display port status
    debug: var=port.stdout_lines
    listen: port_status
tasks:
  - name: Reload service if checks fail
    block:
      - name: Check process status
        shell: ps -aux | grep {{ app1 }} | grep -v grep
        notify: port status
    rescue:
      - name: fetching proc ids
        shell: ps -aux | grep {{ app2 }} | grep -v grep | awk '{print $2}'
        register: result
        ignore_errors: True
      - name: Reloading config
        shell: "kill -HUP {{ item }}"
        with_items:
          - "{{ result.stdout_lines }}"
        notify: port_status

Below is the output I get when running it:

TASK [Check service status] *********************************************************************************************************
changed: [localhost]

RUNNING HANDLER [port status] ****************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": true, "cmd": "netstat -nltp | grep 3306", "delta": "0:00:00.017951", "end": "2019-03-13 22:04:41.024950", "msg": "non-zero return code", "rc": 1, "start": "2019-03-13 22:04:41.006999", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}

NO MORE HOSTS LEFT *****************************************************************************************************************
        to retry, use: --limit @/home/sachin/ansible.retry

PLAY RECAP *****************************************************************************************************************
localhost : ok=1 changed=1 unreachable=0 failed=1
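The behavior shown above is expected: handlers run after the tasks section (at flush points), outside the block, so a failing handler cannot trigger that block's rescue. One workaround is to run the check as a regular task inside the block instead of as a handler. A sketch reusing the names from the playbook above:

```yaml
tasks:
  - name: Reload service if checks fail
    block:
      - name: Check process status
        shell: ps -aux | grep {{ app1 }} | grep -v grep
      - name: Check port status (a regular task, so its failure triggers rescue)
        shell: netstat -nltp | grep {{ app1_port }}
        register: port
    rescue:
      - name: fetching proc ids
        shell: ps -aux | grep {{ app2 }} | grep -v grep | awk '{print $2}'
        register: result
```

If the handler pattern must be kept for other reasons, a meta: flush_handlers task can force the handlers to run earlier, but their failures are still handled outside the block's rescue.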

Could not find an installable distribution at '/home/customize.iso'

Posted: 30 Dec 2021 01:00 AM PST

The command below works well for a standard ISO provided by Ubuntu, but fails when a customized ISO is used.

virt-install --name=vm --vcpu=18 --ram=65536 --location=/home/customize.iso --network bridge=br0 --network bridge=br0 --disk path=/VMs/harddisk/vm.img -x "console=ttyS0" --nographics -v --debug

Debug log:

Thu, 07 Mar 2019 05:17:54 virt-install 17496] DEBUG (cli:265) Launched with command line: /usr/share/virt-manager/virt-install --name=test --vcpu=18 --ram=65536 --location=/var/lib/libvirt/images/test-0_2_4.iso --network bridge=br0 --network bridge=br0 --disk path=/VMs/wdcsbm/wdcsbm.img -x console=ttyS0 --nographics -v --debug [Thu, 07 Mar 2019 05:17:54 virt-install 17496] DEBUG (cli:279) Requesting libvirt URI default [Thu, 07 Mar 2019 05:17:54 virt-install 17496] DEBUG (cli:282) Received libvirt URI qemu:///system [Thu, 07 Mar 2019 05:17:54 virt-install 17496] DEBUG (virt-install:358) Requesting virt method 'hvm', hv type 'default'. [Thu, 07 Mar 2019 05:17:54 virt-install 17496] DEBUG (virt-install:583) Received virt method 'kvm' [Thu, 07 Mar 2019 05:17:54 virt-install 17496] DEBUG (virt-install:584) Hypervisor name is 'hvm' [Thu, 07 Mar 2019 05:17:54 virt-install 17496] DEBUG (virt-install:270) Distilled --network options: ['bridge=br0', 'bridge=br0'] [Thu, 07 Mar 2019 05:17:54 virt-install 17496] DEBUG (virt-install:316) --graphics compat generated: none [Thu, 07 Mar 2019 05:17:54 virt-install 17496] DEBUG (virt-install:183) Distilled --disk options: ['path=/VMs/test/test.img'] [Thu, 07 Mar 2019 05:17:55 virt-install 17496] DEBUG (distroinstaller:283) installer.detect_distro returned=None [Thu, 07 Mar 2019 05:17:55 virt-install 17496] DEBUG (guest:251) Setting Guest.os_variant to 'None' [Thu, 07 Mar 2019 05:17:55 virt-install 17496] WARNING (virt-install:545) No operating system detected, VM performance may suffer. Specify an OS with --os-variant for optimal results. [Thu, 07 Mar 2019 05:17:55 virt-install 17496] DEBUG (virt-install:697) Guest.has_install_phase: True

Starting install... [Thu, 07 Mar 2019 05:17:55 virt-install 17496] DEBUG (urlfetcher:56) Using scratchdir=/var/lib/libvirt/boot [Thu, 07 Mar 2019 05:17:55 virt-install 17496] DEBUG (urlfetcher:503) Finding distro store for location=/var/lib/libvirt/images/test-0_2_4.iso [Thu, 07 Mar 2019 05:17:55 virt-install 17496] DEBUG (urlfetcher:345) Running isoinfo: ['isoinfo', '-J', '-i', '/var/lib/libvirt/images/test-0_2_4.iso', '-x', '/.treeinfo'] [Thu, 07 Mar 2019 05:17:55 virt-install 17496] DEBUG (urlfetcher:89) Fetching URI: /.treeinfo Retrieving file .treeinfo... | 0 B 00:00:00 [Thu, 07 Mar 2019 05:17:55 virt-install 17496] DEBUG (urlfetcher:164) Saved file to /var/lib/libvirt/boot/virtinst-.treeinfo.d9lSWN [Thu, 07 Mar 2019 05:17:55 virt-install 17496] DEBUG (urlfetcher:404) Did not find 'family' section in treeinfo [Thu, 07 Mar 2019 05:17:55 virt-install 17496] DEBUG (urlfetcher:345) Running isoinfo: ['isoinfo', '-J', '-i', '/var/lib/libvirt/images/test-0_2_4.iso', '-x', '/content'] [Thu, 07 Mar 2019 05:17:55 virt-install 17496] DEBUG (urlfetcher:89) Fetching URI: /content Retrieving file content... 
| 0 B 00:00:00 [Thu, 07 Mar 2019 05:17:55 virt-install 17496] DEBUG (urlfetcher:1144) No treearch found in uri, defaulting to arch=i386 [Thu, 07 Mar 2019 05:17:55 virt-install 17496] DEBUG (urlfetcher:357) Running isoinfo: ['isoinfo', '-J', '-i', '/var/lib/libvirt/images/test-0_2_4.iso', '-f'] [Thu, 07 Mar 2019 05:17:55 virt-install 17496] DEBUG (urlfetcher:144) hasFile(/current/images/MANIFEST) returning False [Thu, 07 Mar 2019 05:17:55 virt-install 17496] DEBUG (urlfetcher:144) hasFile(/daily/MANIFEST) returning False [Thu, 07 Mar 2019 05:17:55 virt-install 17496] DEBUG (urlfetcher:144) hasFile(/.disk/info) returning True [Thu, 07 Mar 2019 05:17:55 virt-install 17496] DEBUG (urlfetcher:345) Running isoinfo: ['isoinfo', '-J', '-i', '/var/lib/libvirt/images/test-0_2_4.iso', '-x', '/.disk/info'] [Thu, 07 Mar 2019 05:17:55 virt-install 17496] DEBUG (urlfetcher:89) Fetching URI: /.disk/info Retrieving file info... | 51 B 00:00:00 [Thu, 07 Mar 2019 05:17:55 virt-install 17496] DEBUG (urlfetcher:1191) Regex didn't match, not a Debian distro [Thu, 07 Mar 2019 05:17:55 virt-install 17496] DEBUG (urlfetcher:144) hasFile(/Fedora) returning False [Thu, 07 Mar 2019 05:17:55 virt-install 17496] DEBUG (urlfetcher:144) hasFile(/SL) returning False [Thu, 07 Mar 2019 05:17:55 virt-install 17496] DEBUG (urlfetcher:144) hasFile(/CentOS) returning False [Thu, 07 Mar 2019 05:17:55 virt-install 17496] DEBUG (urlfetcher:144) hasFile(/VERSION) returning False [Thu, 07 Mar 2019 05:17:55 virt-install 17496] DEBUG (urlfetcher:144) hasFile(/.disk/info) returning True [Thu, 07 Mar 2019 05:17:55 virt-install 17496] DEBUG (urlfetcher:345) Running isoinfo: ['isoinfo', '-J', '-i', '/var/lib/libvirt/images/test-0_2_4.iso', '-x', '/.disk/info'] [Thu, 07 Mar 2019 05:17:55 virt-install 17496] DEBUG (urlfetcher:89) Fetching URI: /.disk/info Retrieving file info... 
| 51 B 00:00:00 [Thu, 07 Mar 2019 05:17:55 virt-install 17496] DEBUG (urlfetcher:1378) Regex didn't match, not a ALT Linux distro [Thu, 07 Mar 2019 05:17:55 virt-install 17496] DEBUG (urlfetcher:1144) No treearch found in uri, defaulting to arch=i386 [Thu, 07 Mar 2019 05:17:55 virt-install 17496] DEBUG (urlfetcher:144) hasFile(/current/images/MANIFEST) returning False [Thu, 07 Mar 2019 05:17:55 virt-install 17496] DEBUG (urlfetcher:144) hasFile(/daily/MANIFEST) returning False [Thu, 07 Mar 2019 05:17:55 virt-install 17496] DEBUG (urlfetcher:144) hasFile(/.disk/info) returning True [Thu, 07 Mar 2019 05:17:55 virt-install 17496] DEBUG (urlfetcher:345) Running isoinfo: ['isoinfo', '-J', '-i', '/var/lib/libvirt/images/test-0_2_4.iso', '-x', '/.disk/info'] [Thu, 07 Mar 2019 05:17:55 virt-install 17496] DEBUG (urlfetcher:89) Fetching URI: /.disk/info Retrieving file info... | 51 B 00:00:00 [Thu, 07 Mar 2019 05:17:55 virt-install 17496] DEBUG (urlfetcher:1191) Regex didn't match, not a Ubuntu distro [Thu, 07 Mar 2019 05:17:55 virt-install 17496] DEBUG (urlfetcher:144) hasFile(/Server) returning False [Thu, 07 Mar 2019 05:17:55 virt-install 17496] DEBUG (urlfetcher:144) hasFile(/Client) returning False [Thu, 07 Mar 2019 05:17:55 virt-install 17496] DEBUG (urlfetcher:144) hasFile(/RedHat) returning False [Thu, 07 Mar 2019 05:17:55 virt-install 17496] DEBUG (urlfetcher:144) hasFile(/images/pxeboot/vmlinuz) returning False [Thu, 07 Mar 2019 05:17:55 virt-install 17496] DEBUG (urlfetcher:144) hasFile(/ppc/ppc64/vmlinuz) returning False [Thu, 07 Mar 2019 05:17:55 virt-install 17496] DEBUG (urlfetcher:144) hasFile(/images/boot.iso) returning False [Thu, 07 Mar 2019 05:17:55 virt-install 17496] DEBUG (urlfetcher:144) hasFile(/boot/boot.iso) returning False [Thu, 07 Mar 2019 05:17:55 virt-install 17496] DEBUG (urlfetcher:144) hasFile(/current/images/netboot/mini.iso) returning False [Thu, 07 Mar 2019 05:17:55 virt-install 17496] DEBUG (urlfetcher:144) 
hasFile(/install/images/boot.iso) returning False [Thu, 07 Mar 2019 05:17:55 virt-install 17496] DEBUG (urlfetcher:144) hasFile(/) returning False [Thu, 07 Mar 2019 05:17:55 virt-install 17496] DEBUG (cli:317) File "/usr/share/virt-manager/virt-install", line 1008, in sys.exit(main()) File "/usr/share/virt-manager/virt-install", line 1002, in main start_install(guest, options) File "/usr/share/virt-manager/virt-install", line 728, in start_install fail(e, do_exit=False) File "/usr/share/virt-manager/virtinst/cli.py", line 317, in fail logging.debug("".join(traceback.format_stack()))

[Thu, 07 Mar 2019 05:17:55 virt-install 17496] ERROR (cli:318) Could not find an installable distribution at '/var/lib/libvirt/images/test-0_2_4.iso': The URL could not be accessed, maybe you mistyped?

The location must be the root directory of an install tree. See virt-install man page for various distro examples. [Thu, 07 Mar 2019 05:17:55 virt-install 17496] DEBUG (cli:320) Traceback (most recent call last): File "/usr/share/virt-manager/virt-install", line 707, in start_install transient=options.transient) File "/usr/share/virt-manager/virtinst/guest.py", line 480, in start_install self._prepare_install(meter, dry) File "/usr/share/virt-manager/virtinst/guest.py", line 313, in _prepare_install self.installer.prepare(self, meter) File "/usr/share/virt-manager/virtinst/installer.py", line 200, in prepare self._prepare(guest, meter) File "/usr/share/virt-manager/virtinst/distroinstaller.py", line 220, in _prepare self._prepare_kernel_url(guest, fetcher) File "/usr/share/virt-manager/virtinst/distroinstaller.py", line 127, in _prepare_kernel_url store = self._get_store(guest, fetcher) File "/usr/share/virt-manager/virtinst/distroinstaller.py", line 114, in _get_store self._cached_store = urlfetcher.getDistroStore(guest, fetcher) File "/usr/share/virt-manager/virtinst/urlfetcher.py", line 559, in getDistroStore (fetcher.location, extramsg))) ValueError: Could not find an installable distribution at '/var/lib/libvirt/images/test-0_2_4.iso': The URL could not be accessed, maybe you mistyped?

The location must be the root directory of an install tree. See virt-install man page for various distro examples. [Thu, 07 Mar 2019 05:17:55 virt-install 17496] DEBUG (cli:331) Domain installation does not appear to have been successful. If it was, you can restart your domain by running: virsh --connect qemu:///system start wdcsbm otherwise, please restart your installation. Domain installation does not appear to have been successful. If it was, you can restart your domain by running: virsh --connect qemu:///system start wdcsbm otherwise, please restart your installation. root@kvm01:/media/cdrom#

Any help will be appreciated.
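The debug log above shows virt-install probing the ISO for install-tree metadata (.treeinfo, MANIFEST, distro markers) and giving up, which is exactly what --location requires and what a customized live/standalone ISO usually lacks. A hedged alternative is to boot the ISO as a CD-ROM instead (note that --cdrom cannot pass kernel arguments, so the -x "console=ttyS0" option would have to be dropped and the console setting baked into the ISO's bootloader):

```
virt-install --name=vm --vcpu=18 --ram=65536 \
  --cdrom=/home/customize.iso \
  --network bridge=br0 --network bridge=br0 \
  --disk path=/VMs/harddisk/vm.img \
  --nographics -v --debug
```

This is a sketch reusing the flags from the failing command, not a verified invocation for this particular ISO.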

Why does AWS Lambda need to pass ecsTaskExecutionRole to ECS task

Posted: 29 Dec 2021 10:02 PM PST

I am writing an AWS Lambda function to trigger an ECS Fargate task. I am following the example provided at Run tasks with AWS Fargate and Lambda. While my setup works, there is one of the parts involving IAM roles that I do not understand.

One of the steps is to create an ECS task. I create that task with its "Task execution IAM role" left at ecsTaskExecutionRole. According to the info on the ECS task setup page, the "Task execution IAM role" is

The role that authorizes Amazon ECS to pull private images and publish logs for your task. This takes the place of the EC2 Instance role when running tasks.

Next, I create the Lambda function. Part of that Lambda function setup is the creation of another IAM role because, according to the "Run tasks with AWS Fargate and Lambda" page,

The Lambda would need IAM role with 2 policies - one to run the task, and second to pass the ecsTaskExecutionRole to the task.

The role looks like this (I have compressed the white-space to save space):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt1512361420000",
            "Effect": "Allow",
            "Action": [ "ecs:RunTask" ],
            "Resource": [ "*" ]
        },
        {
            "Sid": "Stmt1512361593000",
            "Effect": "Allow",
            "Action": [ "iam:PassRole" ],
            "Resource": [ "arn:aws:iam::************:role/ecsTaskExecutionRole" ]
        }
    ]
}

What I don't understand is why the Lambda function has to have this iam:PassRole permission. Why does the Lambda function have to "pass the ecsTaskExecutionRole to the task"? Doesn't the ECS task get that role assigned automatically when it runs due to the fact that I set "Task execution IAM role" to ecsTaskExecutionRole? If not, then what is the point of the "Task execution IAM role" setting?

Postfix modify email in queue and re-inject

Posted: 29 Dec 2021 10:34 PM PST

I have a legacy webapp that sends mail to an external SMTP server (specified in a conf file). These emails come from the "noreply" account and are delivered correctly. Now we want mail to come from "user@domain.tld", but unfortunately it is not possible to modify the app. From the website we can identify the logged-in user, but we're unable to set it before sending. So we have to intercept the mail before it reaches the external SMTP server. For this, we've configured a local Postfix to stand in for the external SMTP server. It has to accept mail, change the sender (the new sender will be in the Subject between some special characters) and re-route the mail to the official external SMTP server. All mails have attachments (doc/pdf files). Is there any direct command/method to do this?

At a high level, the solution I thought of is: hold the queue, postcat the messages, change the sender with a script, and send the mail with the mail/mailx command. Thanks.
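Rather than holding the queue and postcat-ing messages, the hold/rewrite/re-inject idea maps cleanly onto Postfix's standard content filter mechanism: Postfix pipes each message to a script, which can extract the marker from the Subject and re-submit the mail with the new sender. A rough sketch of the wiring (the filter name and script path are assumptions; the pattern follows Postfix's FILTER_README "simple content filter"):

```
# main.cf: route every message through the filter once
content_filter = rewrite-sender:dummy

# master.cf: pipe messages to a script that re-injects them
rewrite-sender unix - n n - 10 pipe
  flags=Rq user=filter argv=/usr/local/bin/rewrite-sender.sh -f ${sender} -- ${recipient}
```

The hypothetical rewrite-sender.sh would parse the Subject for the special-character-delimited sender and re-inject the message with sendmail -f new_sender, preserving the attachments untouched since only headers are edited.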

Nginx isn't executing php files, while using H5AI

Posted: 29 Dec 2021 11:07 PM PST

I'm trying to install h5ai on my Debian 8 server. I'm using nginx and PHP 7, and when I try to access my address (in this case share.chaton-poulpe-pieuvre.tk), it makes me download the file instead of executing it.

My h5ai files are in /usr/share/nginx/share; in this directory are the public and private directories of h5ai. It's version 0.29.0 of h5ai.

Here is my nginx .conf file for H5AI :

server {
        listen       80;
        server_name  share.chaton-poulpe-pieuvre.tk;
        return 301 https://share.chaton-poulpe-pieuvre.tk$request_uri;
}
server {
        server_name  share.chaton-poulpe-pieuvre.tk;
        listen 443 ssl http2;
        root /usr/share/nginx/share;
        index index.html index.php /public/index.php;
        ssl_certificate /etc/letsencrypt/live/chaton-poulpe-pieuvre.tk/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/chaton-poulpe-pieuvre.tk/privkey.pem;
        ssl_trusted_certificate /etc/letsencrypt/live/chaton-poulpe-pieuvre.tk/chain.pem;

        ssl_protocols TLSv1.2;
        ssl_ecdh_curve secp384r1;
        ssl_ciphers EECDH+AESGCM:EECDH+AES;
        ssl_prefer_server_ciphers on;
        ssl_stapling on;
        ssl_stapling_verify on;
        resolver 80.67.169.12 80.67.169.40 valid=300s;
        resolver_timeout 5s;

        location ~ \.php$ {
            try_files $uri =404;
            fastcgi_pass unix:/run/php/php7.0-fpm.sock;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
            fastcgi_intercept_errors on;
            fastcgi_ignore_client_abort off;
            fastcgi_connect_timeout 60;
            fastcgi_send_timeout 180;
            fastcgi_read_timeout 180;
            fastcgi_buffers 4 256k;
            fastcgi_buffer_size 128k;
            fastcgi_busy_buffers_size 256k;
            fastcgi_temp_file_write_size 256k;
        }

        autoindex on;
}

Thanks for any help, guys.

Windows Server 2012 Proxy Setting Using Group Policy

Posted: 29 Dec 2021 10:02 PM PST

I want to set a proxy for all users on a standalone Windows Server 2012 machine using local Group Policy. My server is not a domain controller. How can I do that?
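For what it's worth, the local policy "Make proxy settings per-machine (rather than per-user)" can also be expressed directly as registry values, which is handy for scripting it across non-domain machines. A sketch as a .reg fragment; the proxy address proxy.example.com:8080 is a placeholder, and this is an assumption based on how the Internet Settings policy is stored, not an official recipe:

```reg
Windows Registry Editor Version 5.00

; Policy equivalent of "Make proxy settings per-machine": 0 = machine-wide
[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\CurrentVersion\Internet Settings]
"ProxySettingsPerUser"=dword:00000000

; Machine-wide proxy settings (hypothetical proxy address)
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings]
"ProxyEnable"=dword:00000001
"ProxyServer"="proxy.example.com:8080"
```

With ProxySettingsPerUser set to 0, all users should pick up the machine-wide proxy rather than their per-user settings.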

Can't start any service after installing iRedMail on CentOS

Posted: 30 Dec 2021 01:00 AM PST

I have attempted to install iRedMail on one of my servers. After reboot, I can't start any service anymore.

[root@mx ~]# systemctl start httpd

** (pkttyagent:16323): WARNING **: Unable to register authentication agent: GDBus.Error:org.freedesktop.DBus.Error.ServiceUnknown: The name org.freedesktop.PolicyKit1 was not provided by any .service files
Error registering authentication agent: GDBus.Error:org.freedesktop.DBus.Error.ServiceUnknown: The name org.freedesktop.PolicyKit1 was not provided by any .service files (g-dbus-error-quark, 2)
[root@mx ~]# systemctl start mysqld

** (pkttyagent:16348): WARNING **: Unable to register authentication agent: GDBus.Error:org.freedesktop.DBus.Error.ServiceUnknown: The name org.freedesktop.PolicyKit1 was not provided by any .service files
Error registering authentication agent: GDBus.Error:org.freedesktop.DBus.Error.ServiceUnknown: The name org.freedesktop.PolicyKit1 was not provided by any .service files (g-dbus-error-quark, 2)
Failed to start mysqld.service: Unit mysqld.service failed to load: No such file or directory.
[root@mx ~]#

Does anyone know how to get this sorted out?

Update: Re-installing and starting polkit gave me this:

[root@mx ~]# systemctl start polkit
Error getting authority: Error initializing authority: Exhausted all available authentication mechanisms (tried: EXTERNAL, DBUS_COOKIE_SHA1, ANONYMOUS) (available: EXTERNAL, DBUS_COOKIE_SHA1, ANONYMOUS) (g-io-error-quark, 0)
Job for polkit.service failed because the control process exited with error code. See "systemctl status polkit.service" and "journalctl -xe" for details.
[root@mx ~]# systemctl status polkit.service
● polkit.service - Authorization Manager
   Loaded: loaded (/usr/lib/systemd/system/polkit.service; static; vendor preset: enabled)
   Active: failed (Result: exit-code) since Thu 2016-07-21 21:24:45 JST; 20s ago
     Docs: man:polkit(8)
  Process: 2478 ExecStart=/usr/lib/polkit-1/polkitd --no-debug (code=exited, status=1/FAILURE)
 Main PID: 2478 (code=exited, status=1/FAILURE)

Jul 21 21:24:45 mx.076.wtf systemd[1]: Starting Authorization Manager...
Jul 21 21:24:45 mx.076.wtf systemd[1]: polkit.service: main process exited, code=exited, status=1/FAILURE
Jul 21 21:24:45 mx.076.wtf systemd[1]: Failed to start Authorization Manager.
Jul 21 21:24:45 mx.076.wtf systemd[1]: Unit polkit.service entered failed state.
Jul 21 21:24:45 mx.076.wtf systemd[1]: polkit.service failed.
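The pkttyagent warnings above usually mean polkit's D-Bus activation files have gone missing. A small sketch to check for them before reinstalling; the helper name check_polkit is made up, and the paths are an assumption based on a stock CentOS 7 layout:

```shell
#!/bin/sh
# check_polkit: hypothetical helper that verifies a list of files exists
check_polkit() {
    for f in "$@"; do
        # report the first missing file and fail
        [ -e "$f" ] || { echo "missing: $f"; return 1; }
    done
    echo "polkit files present"
}

# Typical locations on CentOS 7 (assumption); if either is missing,
# reinstalling the package is the usual fix:
#   check_polkit /usr/share/dbus-1/system-services/org.freedesktop.PolicyKit1.service \
#                /usr/lib/systemd/system/polkit.service \
#     || yum reinstall -y polkit
```

If the files are present but polkit still fails, checking that the polkitd user and /etc/passwd entries survived the iRedMail install would be my next step.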

Getting "Can't create/write to file '/var/lib/mysql/is_writable'" using docker (inside vagrant on OS X)

Posted: 29 Dec 2021 11:07 PM PST

I am trying to use docker-compose/docker inside a vagrant machine hosted on OS X. Running 'docker-compose up' always fails with

mysqld: Can't create/write to file '/var/lib/mysql/is_writable' (Errcode: 13 - Permission denied)

I can manually create the file just fine, however (using touch and sudo -g vagrant touch).

Does anyone know where to look to debug this?


Log:

db_1  | Initializing database
db_1  | mysqld: Can't create/write to file '/var/lib/mysql/is_writable' (Errcode: 13 - Permission denied)
db_1  | 2016-05-21T22:55:38.877522Z 0 [ERROR] --initialize specified but the data directory exists and is not writable. Aborting.
db_1  | 2016-05-21T22:55:38.877799Z 0 [ERROR] Aborting

My docker-compose.yaml:

version: '2'
services:
  db:
    privileged: true
    image: mysql
    volumes:
      - "./.data/db:/var/lib/mysql"
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: wordpress
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress
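One workaround I'd try (a sketch, not verified in this setup): keep MySQL's data directory out of the VirtualBox synced folder entirely, since vboxsf mounts don't honour the chown/chmod that mysqld's --initialize step performs. Using a Docker named volume instead of the ./.data/db bind mount (db_data is an arbitrary name):

```yaml
version: '2'
services:
  db:
    image: mysql
    # named volume instead of "./.data/db:/var/lib/mysql",
    # so the datadir lives in Docker's storage, not on vboxsf
    volumes:
      - db_data:/var/lib/mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: wordpress
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress

volumes:
  db_data:
```

The trade-off is that the data no longer appears under the project directory on the OS X host, only inside the Vagrant VM's Docker storage.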

My Vagrantfile:

# -*- mode: ruby -*-
# vi: set ft=ruby :

# All Vagrant configuration is done below. The "2" in Vagrant.configure
# configures the configuration version (we support older styles for
# backwards compatibility). Please don't change it unless you know what
# you're doing.
Vagrant.configure(2) do |config|
  # The most common configuration options are documented and commented below.
  # For a complete reference, please see the online documentation at
  # https://docs.vagrantup.com.

  # Every Vagrant development environment requires a box. You can search for
  # boxes at https://atlas.hashicorp.com/search.
  config.vm.box = "ubuntu/trusty64"
  # config.vm.box = "debian/jessie64"

  # Disable automatic box update checking. If you disable this, then
  # boxes will only be checked for updates when the user runs
  # `vagrant box outdated`. This is not recommended.
  # config.vm.box_check_update = false

  # Create a forwarded port mapping which allows access to a specific port
  # within the machine from a port on the host machine. In the example below,
  # accessing "localhost:8080" will access port 80 on the guest machine.
  # config.vm.network "forwarded_port", guest: 80, host: 8080

  # Create a private network, which allows host-only access to the machine
  # using a specific IP.
  # config.vm.network "private_network", ip: "192.168.33.10"

  # Create a public network, which generally matched to bridged network.
  # Bridged networks make the machine appear as another physical device on
  # your network.
  # config.vm.network "public_network"

  # Share an additional folder to the guest VM. The first argument is
  # the path on the host to the actual folder. The second argument is
  # the path on the guest to mount the folder. And the optional third
  # argument is a set of non-required options.
  # config.vm.synced_folder "../data", "/vagrant_data"

  # Provider-specific configuration so you can fine-tune various
  # backing providers for Vagrant. These expose provider-specific options.
  # Example for VirtualBox:
  #
  # config.vm.provider "virtualbox" do |vb|
  #   # Display the VirtualBox GUI when booting the machine
  #   vb.gui = true
  #
  #   # Customize the amount of memory on the VM:
  #   vb.memory = "1024"
  # end
  #
  # View the documentation for the provider you are using for more
  # information on available options.

  # Define a Vagrant Push strategy for pushing to Atlas. Other push strategies
  # such as FTP and Heroku are also available. See the documentation at
  # https://docs.vagrantup.com/v2/push/atlas.html for more information.
  # config.push.define "atlas" do |push|
  #   push.app = "YOUR_ATLAS_USERNAME/YOUR_APPLICATION_NAME"
  # end

  # Enable provisioning with a shell script. Additional provisioners such as
  # Puppet, Chef, Ansible, Salt, and Docker are also available. Please see the
  # documentation for more information about their specific syntax and use.
  # config.vm.provision "shell", inline: <<-SHELL
  #   sudo apt-get update
  #   sudo apt-get install -y apache2
  # SHELL

  #####################################################################
  # Custom Configuration

  config.vm.define "dev" do |dev|

    # if File.directory?("~/Dev")
    #   dev.vm.synced_folder "~/Dev", "/vagrant/Dev"
    # end
    # custom: above does not work for symlinks
    dev.vm.synced_folder "~/Dev", "/home/vagrant/Dev"
#    dev.vm.synced_folder "~/Dev/docker", "/docker"

    dev.vm.provider "virtualbox" do |vb|
      vb.gui = false
      vb.memory = "2048"
    end

    dev.vm.provision "shell",
                     run: "always",
                     inline: <<-SHELL
      pushd /vagrant/conf
      chmod 755 setup.sh && ./setup.sh
      popd
    SHELL

    dev.ssh.forward_x11 = true

    # Install the caching plugin if you want to take advantage of the cache
    # $ vagrant plugin install vagrant-cachier
    if Vagrant.has_plugin?("vagrant-cachier")
      # Configure cached packages to be shared between instances of the same base box.
      # More info on http://fgrehm.viewdocs.io/vagrant-cachier/usage
      config.cache.scope = :machine
    end

  end

end

rsync files to a kubernetes pod

Posted: 30 Dec 2021 12:18 AM PST

I need to rsync a file tree to a specific pod in a kubernetes cluster. It seems it should be possible if only one can convince rsync that kubectl acts sort of like rsh. Something like:

rsync --rsh='kubectl exec -i podname -- ' -r foo x:/tmp  

... except that this runs into problems with x since rsync assumes a hostname is needed:

exec: "x": executable file not found in $PATH  

I cannot seem to find a way to help rsync construct the rsh command. Is there a way to do this? Or some other method by which relatively efficient file transfer can be achieved over a pipe?

(I am aware of gcloud compute copy-files, but that can only copy onto the node, not into a pod, can't it?)
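One way to get past the hostname problem is a self-re-invoking wrapper: on the first pass it hands itself to rsync as the remote shell, and when rsync calls it back it swaps the ssh transport for kubectl exec. A sketch, assuming rsync exists inside the pod; the name krsync and the /tmp path are just for illustration:

```shell
#!/bin/sh
# Write a hypothetical "krsync" wrapper that tunnels rsync through kubectl exec.
cat > /tmp/krsync <<'EOF'
#!/bin/sh
if [ -z "$KRSYNC_STARTED" ]; then
    # First pass: re-run rsync with ourselves as the remote shell
    KRSYNC_STARTED=true exec rsync --blocking-io --rsh "$0" "$@"
fi
# Second pass: rsync invokes "$0 <host> <command...>";
# the "host" slot carries the pod name, so strip it off and exec into the pod
pod="$1"; shift
exec kubectl exec -i "$pod" -- "$@"
EOF
chmod +x /tmp/krsync
```

Then /tmp/krsync -r foo podname:/tmp should behave like the attempted command, with the pod name standing in for the hostname rsync insists on.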

How to allow SonicOS SSLVPN IP Range to access a single WAN Host

Posted: 29 Dec 2021 09:05 PM PST

We have a SonicWall TZ 205 W (SonicOS Enhanced 5.8.1.15-48o) Network Security Appliance.

Users from outside take an SSLVPN connection with NetExtender. They can access resources in the LAN just fine.

We have also configured a S2S VPN connection from the SonicWall to Azure Virtual network. The users of the SSLVPN have been added with this access and it works just fine.

The Problem:

However, we also have an SQL Azure database which we would like to route through the SSLVPN. It cannot be added to the Azure Virtual Network because Microsoft doesn't support this, so it has to reside in the WAN zone.

  1. We already have added the SQL Azure host's IP address to the SSLVPN client routes.

  2. We already have a following firewall access rule:

    • Source: SSLVPN IP Pool
    • Destination: SQL Azure (Address Object: Host, Zone: WAN)
    • Service: Any
    • Action: Allow
  3. Traffic statistics for this rule show 0 Tx and Rx bytes.

If the SQL Azure database were behind a VPN connection, it would simply be a matter of adding the VPN access for the SSLVPN users, but how do I make this SonicWall allow connections from the SSLVPN IP range to a host in the WAN zone?

How to get all fingerprints for .ssh/authorized_keys(2) file

Posted: 30 Dec 2021 01:26 AM PST

Is there a simple way to get a list of all fingerprints entered in the .ssh/authorized_keys || .ssh/authorized_keys2 file?

ssh-keygen -l -f .ssh/authorized_keys   

will only return the fingerprint of the first line / entry / public key.

A hack with awk:

awk 'BEGIN {
    while (getline < ".ssh/authorized_keys") {
        if ($1!~"ssh-(r|d)sa") {continue}
        print "Fingerprint for "$3
        system("echo " "\""$0"\"> /tmp/authorizedPublicKey.scan; \
            ssh-keygen -l -f /tmp/authorizedPublicKey.scan; \
            rm /tmp/authorizedPublicKey.scan"
        )
    }
}'

but is there an easier way or ssh command I didn't find?
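For comparison, a plain shell loop that feeds each key to ssh-keygen via /dev/stdin avoids the temp file and isn't limited to ssh-rsa/ssh-dss. A sketch; the function name fingerprints is made up, and it assumes plain key lines without options prefixes. (Recent OpenSSH releases reportedly also list every key with a bare ssh-keygen -lf ~/.ssh/authorized_keys.)

```shell
#!/bin/sh
# fingerprints: print the fingerprint of every key in an authorized_keys file
fingerprints() {
    while IFS= read -r key; do
        # skip blank lines and comments
        case "$key" in
            ''|'#'*) continue ;;
        esac
        # hand the single key line to ssh-keygen as if it were a key file
        printf '%s\n' "$key" | ssh-keygen -lf /dev/stdin
    done < "${1:-$HOME/.ssh/authorized_keys}"
}
```

Usage: fingerprints, or fingerprints .ssh/authorized_keys2 for the second file.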

How to determine if I'm logged in via SSH?

Posted: 29 Dec 2021 09:11 PM PST

I'm currently setting up a fairly complex bash configuration which shall be used on multiple machines. I'm trying to find out whether it is possible to determine if I'm logged in via SSH or sitting at a local machine. That way I could, for instance, set some aliases depending on that fact, like aliasing halt to restart, since stopping a remote server might not be the best thing to do.

What I know so far is that the environment variable SSH_CLIENT is set when I log in via ssh. Unfortunately, this variable is discarded when I start a superuser shell with sudo -s. I also know that I can pass a parameter to sudo that instructs it to copy all my environment variables to the new shell environment, but if I don't want to do this, is there another way?
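One approach that survives sudo -s is to fall back on the process tree: check the SSH_* variables first, and if they are gone, walk up the parent processes looking for sshd. A sketch; is_ssh is a made-up name, and ps output details vary slightly between systems:

```shell
#!/bin/sh
# is_ssh: succeed (return 0) if this shell appears to be inside an SSH session
is_ssh() {
    # easy case: the environment still carries the SSH markers
    if [ -n "$SSH_CLIENT" ] || [ -n "$SSH_TTY" ]; then
        return 0
    fi
    # harder case (e.g. after `sudo -s`): walk up the process tree
    pid=$$
    while [ "${pid:-0}" -gt 1 ] 2>/dev/null; do
        case "$(ps -o comm= -p "$pid" 2>/dev/null)" in
            *sshd*) return 0 ;;
        esac
        pid=$(ps -o ppid= -p "$pid" 2>/dev/null | tr -d ' ')
    done
    return 1
}
```

In a .bashrc this could drive the aliasing idea, e.g. is_ssh && alias halt='echo "remote session - use restart"'.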
