Monday, March 22, 2021

Recent Questions - Server Fault



Double VPN not working

Posted: 22 Mar 2021 09:58 PM PDT

I have set up OpenVPN on my Raspberry Pi and it works correctly: I can log in to my Raspberry Pi from my cell phone. The problem comes when I activate my paid VPN (Windscribe) with windscribe connect. After that I can no longer reach my Raspberry Pi from my cell phone.

I've been experimenting with iptables without success, creating forward rules for interfaces, tunnels, and many combinations, but nothing seems to work. In the end I reset everything.

Here are my configurations:

sudo iptables -t nat -S
-P PREROUTING ACCEPT
-P INPUT ACCEPT
-P POSTROUTING ACCEPT
-P OUTPUT ACCEPT
-A POSTROUTING -s 10.8.0.0/24 -o wlan0 -m comment --comment openvpn-nat-rule -j MASQUERADE
-A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE

pi@raspberrypi:~ $ sudo iptables -S
-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT DROP
-A OUTPUT ! -o tun+ -p tcp -m tcp --dport 53 -j DROP
-A OUTPUT ! -o tun+ -p udp -m udp --dport 53 -j DROP
-A OUTPUT -d 192.168.0.0/16 -j ACCEPT
-A OUTPUT -d 10.0.0.0/8 -j ACCEPT
-A OUTPUT -d 172.16.0.0/12 -j ACCEPT
-A OUTPUT -d 104.20.26.217/32 -j ACCEPT
-A OUTPUT -d 104.20.27.217/32 -j ACCEPT
-A OUTPUT -d 172.67.17.175/32 -j ACCEPT
-A OUTPUT -d 104.21.93.29/32 -j ACCEPT
-A OUTPUT -d 172.67.203.127/32 -j ACCEPT
-A OUTPUT -d 104.21.53.216/32 -j ACCEPT
-A OUTPUT -d 172.67.219.39/32 -j ACCEPT
-A OUTPUT -d 172.67.189.40/32 -j ACCEPT
-A OUTPUT -d 104.21.65.74/32 -j ACCEPT
-A OUTPUT -o tun+ -j ACCEPT
-A OUTPUT -d 127.0.0.1/32 -j ACCEPT
-A OUTPUT -d 209.58.129.121/32 -j ACCEPT

pi@raspberrypi:~ $ ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.0.111  netmask 255.255.255.0  broadcast 192.168.0.255
        ether b8:27:eb:ec:6a:4b  txqueuelen 1000  (Ethernet)
        RX packets 19989  bytes 21885907 (20.8 MiB)
        RX errors 160  dropped 4  overruns 0  frame 0
        TX packets 11508  bytes 1206589 (1.1 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 618  bytes 201828 (197.0 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 618  bytes 201828 (197.0 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

tun0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST>  mtu 1500
        inet 10.8.0.1  netmask 255.255.255.0  destination 10.8.0.1
        unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  txqueuelen 100  (UNSPEC)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

tun1: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST>  mtu 1500
        inet 10.120.138.29  netmask 255.255.254.0  destination 10.120.138.29
        unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  txqueuelen 100  (UNSPEC)
        RX packets 164  bytes 32755 (31.9 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 961  bytes 114896 (112.2 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

wlan0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        ether b8:27:eb:b9:3f:1e  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

pi@raspberrypi:~ $ ip route list
0.0.0.0/1 via 10.120.138.1 dev tun1
default via 192.168.0.1 dev eth0 src 192.168.0.111 metric 202
10.8.0.0/24 dev tun0 proto kernel scope link src 10.8.0.1
10.120.138.0/23 dev tun1 proto kernel scope link src 10.120.138.29
128.0.0.0/1 via 10.120.138.1 dev tun1
192.168.0.0/24 dev eth0 proto dhcp scope link src 192.168.0.111 metric 202
209.58.129.121 via 192.168.0.1 dev eth0

pi@raspberrypi:~ $ ip rule list
0:      from all lookup local
32766:  from all lookup main
32767:  from all lookup default
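A pattern that often explains this: once Windscribe is up, its 0.0.0.0/1 and 128.0.0.0/1 routes via tun1 capture the replies to your phone, so inbound connections arriving on eth0 answer out the wrong interface. A sketch of a policy-routing fix (table number 128 is arbitrary; addresses come from the output above, and the last rule assumes OpenVPN listens on the default 1194/udp, which the OUTPUT DROP policy would otherwise block):

# replies from the Pi's LAN address keep using the LAN gateway, not tun1
sudo ip rule add from 192.168.0.111 table 128
sudo ip route add table 128 default via 192.168.0.1 dev eth0
sudo ip route add table 128 192.168.0.0/24 dev eth0
# let the OpenVPN server's own packets leave via eth0 despite the DROP policy
sudo iptables -A OUTPUT -o eth0 -p udp --sport 1194 -j ACCEPT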

zfs checksum errors on Solaris 11 under KVM

Posted: 22 Mar 2021 10:08 PM PDT

Synopsis: libvirt 5.6.0, QEMU 4.1.1, Linux kernel 5.5.10-200, Fedora Server 31.

Solaris 11.4 fresh install (with Solaris 10 branded zones), raw disk on XFS (unfortunately, there is no possibility to switch to ZFS on Linux and provide a passthrough ZVOL to the VM). When I copy a large gzipped file onto a ZFS dataset in the Solaris VM, the zpool reports checksum errors, and when I gunzip the file, the result is corrupted.

At first the Solaris VM was hosted on qcow2 virtual disks; I thought CoW on CoW was probably a bad idea, so I switched to raw. Nothing really changed.

Ideas, anyone (I'm actually out of any)? The Solaris 11.4 datasets themselves aren't corrupted. I have also successfully run FreeBSD/ZFS in similar setups under KVM (using ZVOLs, but still on Linux) with no checksum errors there.

Pristine pool:

  pool: oracle
 state: ONLINE
  scan: scrub repaired 0 in 28s with 0 errors on Mon Mar 22 09:58:30 2021
config:

        NAME    STATE      READ WRITE CKSUM
        oracle  ONLINE        0     0     0
          c3d0  ONLINE        0     0     0

errors: No known data errors

Copying the file:

[root@s10-zone ~]# cd /opt/oracle/exchange/
[root@s10-zone exchange]# scp oracle@10.31.31.8:/Backup/oracle/expdp/lcomsys.dmp.gz .
Password:
lcomsys.dmp.gz       100% |*********************************************************************| 27341 MB  2:23:09

Ran a scrub after the copying was finished:

  pool: oracle
 state: ONLINE
status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
        entire pool from backup.
   see: http://support.oracle.com/msg/ZFS-8000-8A
  scan: scrub repaired 6.50K in 5m16s with 3 errors on Tue Mar 23 09:36:34 2021
config:

        NAME    STATE      READ WRITE CKSUM
        oracle  ONLINE        0     0     3
          c3d0  ONLINE        0     0    10

errors: Permanent errors have been detected in the following files:

        /system/zones/s10-zone/root/opt/oracle/exchange/lcomsys.dmp.gz

This is how the Solaris virtual disks are attached:

<disk type='file' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source file='/var/vms/disks/solaris11.img'/>
  <backingStore/>
  <target dev='sda' bus='sata'/>
  <address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>
<disk type='file' device='cdrom'>
  <driver name='qemu' type='raw'/>
  <source file='/var/vms/iso/sol-11_4-text-x86.iso'/>
  <backingStore/>
  <target dev='hda' bus='ide'/>
  <readonly/>
  <address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>
<disk type='file' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source file='/var/vms/disks/solaris10-data.img'/>
  <backingStore/>
  <target dev='hdb' bus='ide'/>
  <address type='drive' controller='0' bus='0' target='0' unit='1'/>
</disk>
<disk type='file' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source file='/var/vms/disks/solaris11-data.img'/>
  <backingStore/>
  <target dev='hdc' bus='ide'/>
  <address type='drive' controller='0' bus='1' target='0' unit='0'/>
</disk>
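One variable worth eliminating (a guess, not a confirmed fix): the emulated IDE/SATA path itself. If the Solaris guest has virtio drivers, moving the data disk that backs the oracle pool off IDE takes the emulation out of the loop. A sketch with virt-xml from the virt-install package (the domain name "solaris11" and the assumption that target hdb is the pool's disk are mine, taken from the XML above):

sudo virt-xml solaris11 --edit target=hdb --disk bus=virtio
# then boot the guest and re-run the copy + scrub to see if CKSUM errors persist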

Windows update showing "Some update files are missing or have problems. We will try to download the update again later." on Azure Windows Server VM

Posted: 22 Mar 2021 08:55 PM PDT

Azure Windows Server 2016 Datacenter VM, version 1607, showing "Some update files are missing or have problems. We will try to download the update again later." with error code 0x80073712.

How to use server_name with stream NGINX?

Posted: 22 Mar 2021 07:53 PM PDT

Current setup as follows:

stream {
    server {
        server_name stream.kingdomgame.org; # this line is resulting in an error
        proxy_pass http://localhost:1935;
    }
}

Works just fine without server_name, but I'd like to use a domain if possible.
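For context, server_name belongs to the http module, which is why nginx rejects it inside stream {}: a raw TCP stream has no Host header to match. The closest stream-level equivalent only works for TLS traffic, where the SNI name can be read before the handshake completes. A sketch using the ngx_stream_ssl_preread module (listen port and the fallback backend are assumptions):

stream {
    map $ssl_preread_server_name $backend {
        stream.kingdomgame.org  127.0.0.1:1935;
        default                 127.0.0.1:1935;
    }
    server {
        listen 443;
        ssl_preread on;
        proxy_pass $backend;
    }
}

For plain, non-TLS traffic on a single port there is nothing to match a domain against; the domain can only select this server by resolving to the listener's address.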

Azure DevOps Pipeline: Code scanner with notifications

Posted: 22 Mar 2021 07:44 PM PDT

We are currently using the WhiteSource Bolt task in our Azure DevOps pipeline to scan our code for known vulnerabilities. This task produces a report at the pipeline level, plus there is also a summary report of all vulnerabilities across all pipelines. This summary report can be exported/sent via email in different formats, but only from the UI.

We would like to get notifications for new vulnerabilities. Let's say that our current code has no vulnerabilities, so we would like to be notified in case the pipeline task finishes and finds some new vulnerabilities. This info can be seen in the UI currently, but there seems to be no option to send notifications (so we are notified automatically vs manually checking the report).

Is anyone aware of any open-source solution for code vulnerability scanning that can integrate with Azure DevOps pipelines and send notifications? WhiteSource Bolt works fine for us; we are just missing the notifications part. (We are aware of the paid version, but that starts at $5k/year, and that's too steep as we are still a small startup.) Thanks in advance!
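Not an endorsement of any one tool, but the usual pattern is to make the scanner step exit non-zero when it finds something, and let Azure DevOps' stock build-failure notifications act as the alert. A sketch with Trivy as an arbitrary open-source example (the flags are Trivy's, not anything WhiteSource-specific):

# scan the repo checkout; a non-zero exit fails the pipeline step,
# which triggers the normal build-failed notification subscription
trivy fs --exit-code 1 --severity HIGH,CRITICAL .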

Apache Server does not display the correct website

Posted: 22 Mar 2021 09:59 PM PDT

Hi, I'm trying to learn apache2 on a VPS. So far it's OK, but I can't find the reason for one problem that I face.

I have a website set up in my /var/www/mydomain.com folder. I created a config file for it at /etc/apache2/sites-available/mydomain.com.conf. Now when I go to www.mydomain.com the desired website is displayed. However, if I take the www. out of the URL and just type mydomain.com, it gives me the Apache default page.

mydomain.com.conf

<VirtualHost *:80>
        ServerName mydomain.com
        ServerAlias www.mydomain.com *mydomain.com
        DocumentRoot /var/www/mydomain.com

        ErrorLog ${APACHE_LOG_DIR}/error.log
        CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>

apache2.conf

# This is the main Apache server configuration file.  It contains the
# configuration directives that give the server its instructions.
# See http://httpd.apache.org/docs/2.4/ for detailed information about
# the directives and /usr/share/doc/apache2/README.Debian about Debian specific
# hints.
#
#
# Summary of how the Apache 2 configuration works in Debian:
# The Apache 2 web server configuration in Debian is quite different to
# upstream's suggested way to configure the web server. This is because Debian's
# default Apache2 installation attempts to make adding and removing modules,
# virtual hosts, and extra configuration directives as flexible as possible, in
# order to make automating the changes and administering the server as easy as
# possible.

# It is split into several files forming the configuration hierarchy outlined
# below, all located in the /etc/apache2/ directory:
#
#   /etc/apache2/
#   |-- apache2.conf
#   |   `--  ports.conf
#   |-- mods-enabled
#   |   |-- *.load
#   |   `-- *.conf
#   |-- conf-enabled
#   |   `-- *.conf
#   `-- sites-enabled
#       `-- *.conf
#
#
# * apache2.conf is the main configuration file (this file). It puts the pieces
#   together by including all remaining configuration files when starting up the
#   web server.
#
# * ports.conf is always included from the main configuration file. It is
#   supposed to determine listening ports for incoming connections which can be
#   customized anytime.
#
# * Configuration files in the mods-enabled/, conf-enabled/ and sites-enabled/
#   directories contain particular configuration snippets which manage modules,
#   global configuration fragments, or virtual host configurations,
#   respectively.
#
#   They are activated by symlinking available configuration files from their
#   respective *-available/ counterparts. These should be managed by using our
#   helpers a2enmod/a2dismod, a2ensite/a2dissite and a2enconf/a2disconf. See
#   their respective man pages for detailed information.
#
# * The binary is called apache2. Due to the use of environment variables, in
#   the default configuration, apache2 needs to be started/stopped with
#   /etc/init.d/apache2 or apache2ctl. Calling /usr/bin/apache2 directly will not
#   work with the default configuration.


# Global configuration
#

#
# ServerRoot: The top of the directory tree under which the server's
# configuration, error, and log files are kept.
#
# NOTE!  If you intend to place this on an NFS (or otherwise network)
# mounted filesystem then please read the Mutex documentation (available
# at <URL:http://httpd.apache.org/docs/2.4/mod/core.html#mutex>);
# you will save yourself a lot of trouble.
#
# Do NOT add a slash at the end of the directory path.
#
#ServerRoot "/etc/apache2"

#
# The accept serialization lock file MUST BE STORED ON A LOCAL DISK.
#
#Mutex file:${APACHE_LOCK_DIR} default

#
# The directory where shm and other runtime files will be stored.
#
DefaultRuntimeDir ${APACHE_RUN_DIR}

#
# PidFile: The file in which the server should record its process
# identification number when it starts.
# This needs to be set in /etc/apache2/envvars
#
PidFile ${APACHE_PID_FILE}

#
# Timeout: The number of seconds before receives and sends time out.
#
Timeout 300

#
# KeepAlive: Whether or not to allow persistent connections (more than
# one request per connection). Set to "Off" to deactivate.
#
KeepAlive On

#
# MaxKeepAliveRequests: The maximum number of requests to allow
# during a persistent connection. Set to 0 to allow an unlimited amount.
# We recommend you leave this number high, for maximum performance.
#
MaxKeepAliveRequests 100

#
# KeepAliveTimeout: Number of seconds to wait for the next request from the
# same client on the same connection.
#
KeepAliveTimeout 5


# These need to be set in /etc/apache2/envvars
User ${APACHE_RUN_USER}
Group ${APACHE_RUN_GROUP}

#
# HostnameLookups: Log the names of clients or just their IP addresses
# e.g., www.apache.org (on) or 204.62.129.132 (off).
# The default is off because it'd be overall better for the net if people
# had to knowingly turn this feature on, since enabling it means that
# each client request will result in AT LEAST one lookup request to the
# nameserver.
#
HostnameLookups Off

# ErrorLog: The location of the error log file.
# If you do not specify an ErrorLog directive within a <VirtualHost>
# container, error messages relating to that virtual host will be
# logged here.  If you *do* define an error logfile for a <VirtualHost>
# container, that host's errors will be logged there and not here.
#
ErrorLog ${APACHE_LOG_DIR}/error.log

#
# LogLevel: Control the severity of messages logged to the error_log.
# Available values: trace8, ..., trace1, debug, info, notice, warn,
# error, crit, alert, emerg.
# It is also possible to configure the log level for particular modules, e.g.
# "LogLevel info ssl:warn"
#
LogLevel warn

# Include module configuration:
IncludeOptional mods-enabled/*.load
IncludeOptional mods-enabled/*.conf

# Include list of ports to listen on
Include ports.conf


# Sets the default security model of the Apache2 HTTPD server. It does
# not allow access to the root filesystem outside of /usr/share and /var/www.
# The former is used by web applications packaged in Debian,
# the latter may be used for local directories served by the web server. If
# your system is serving content from a sub-directory in /srv you must allow
# access here, or in any related virtual host.
<Directory />
    Options FollowSymLinks
    AllowOverride None
    Require all denied
</Directory>

<Directory /usr/share>
    AllowOverride None
    Require all granted
</Directory>

<Directory /var/www/>
    Options Indexes FollowSymLinks
    AllowOverride None
    Require all granted
</Directory>

#<Directory /srv/>
#   Options Indexes FollowSymLinks
#   AllowOverride None
#   Require all granted
#</Directory>


# AccessFileName: The name of the file to look for in each directory
# for additional configuration directives.  See also the AllowOverride
# directive.
#
AccessFileName .htaccess

#
# The following lines prevent .htaccess and .htpasswd files from being
# viewed by Web clients.
#
<FilesMatch "^\.ht">
    Require all denied
</FilesMatch>


#
# The following directives define some format nicknames for use with
# a CustomLog directive.
#
# These deviate from the Common Log Format definitions in that they use %O
# (the actual bytes sent including headers) instead of %b (the size of the
# requested file), because the latter makes it impossible to detect partial
# requests.
#
# Note that the use of %{X-Forwarded-For}i instead of %h is not recommended.
# Use mod_remoteip instead.
#
LogFormat "%v:%p %h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined
LogFormat "%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" combined
LogFormat "%h %l %u %t \"%r\" %>s %O" common
LogFormat "%{Referer}i -> %U" referer
LogFormat "%{User-agent}i" agent

# Include of directories ignores editors' and dpkg's backup files,
# see README.Debian for details.

# Include generic snippets of statements
IncludeOptional conf-enabled/*.conf

# Include the virtual host configurations:
IncludeOptional sites-enabled/*.conf

# vim: syntax=apache ts=4 sw=4 sts=4 sr noet

I'm not sure what is wrong. Please help!
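A few checks that usually pin this kind of thing down (a sketch; Debian-style helper names assumed, and note that Apache wildcard aliases are normally written *.mydomain.com rather than *mydomain.com):

sudo apache2ctl -S                  # shows which VirtualHost actually answers for mydomain.com
sudo a2ensite mydomain.com.conf     # make sure the site is enabled under sites-enabled/
sudo a2dissite 000-default.conf     # stop the default page from catching the bare domain
sudo systemctl reload apache2

Also check that the bare mydomain.com DNS record points at the same server as www; if it resolves elsewhere, no vhost change will help.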

Python Flask App on IIS - Updating Active Directory Attributes on Behalf of User

Posted: 22 Mar 2021 07:27 PM PDT

I've posted this on Stack Overflow, but I'm posting it here as I want help from folks who are familiar with Kerberos delegation and IIS. I am currently trying to figure out how to get my Flask app to handle Active Directory attribute updates on behalf of users in a domain, such as their phone numbers. I currently have this running in IIS 10 on a Windows Server 2019 VM. I have a small virtual lab on Hyper-V that replicates a vanilla Active Directory domain, with a domain controller called dc1, a web server called webserver1, and a client machine called client1.

The web application is run under a service account named service-acct in IIS. Currently, the HTTP request provides me with the Windows auth token of the requesting user (via the ASP.NET Core module), which allows me to impersonate them.

Web.config:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <system.webServer>
        <security>
            <authentication>
                <anonymousAuthentication enabled="false" />
                <windowsAuthentication enabled="true" useKernelMode="true" useAppPoolCredentials="true" />
            </authentication>
        </security>
        <handlers>
            <add name="aspNetCore" path="*" verb="*" modules="AspNetCoreModuleV2" resourceType="Unspecified" requireAccess="Script" />
        </handlers>
        <aspNetCore processPath="C:\apps\Test\venv\scripts\python.exe" arguments="app.py" startupTimeLimit="10" stdoutLogEnabled="true" stdoutLogFile=".\logs\log.log" processesPerApplication="10" forwardWindowsAuthToken="true">
            <environmentVariables>
            </environmentVariables>
        </aspNetCore>
        <httpErrors errorMode="Detailed" />
    </system.webServer>
    <system.web>
        <identity impersonate="true" />
    </system.web>
</configuration>

The workflow looks something like this:

User hits IIS --> Web Server receives HTTP request --> Flask parses the header and gets the windows authentication token --> continue with endpoint python logic  

In terms of impersonation, I have been able to use the win32security Python module to impersonate the user and perform limited operations within the web server (e.g. create folders); however, attempting to update the user's Active Directory attributes through a Flask endpoint leads to a permission error (via the pyad Python module):

pywintypes.com_error: (-2147352567, 'Exception occurred.', (0, 'Active Directory', 'An operations error occurred.\r\n', None, 0, -2147217865), None)  

The relevant endpoint Python code I used is below:

@app.route("/TestImpersonation")
def TestImpersonation():
    key = 'Ms-Aspnetcore-Winauthtoken'
    if key in request.headers.keys():
        handle_str = request.headers[key]
        handle = int(handle_str, 16) # need to convert from Hex / base 16
        win32security.ImpersonateLoggedOnUser(handle)
        user = win32api.GetUserName()

        from pyad import pyad
        user_obj = pyad.from_cn(user)

        description = "Changed by " + str(user) + " on " + datetime.datetime.today().strftime("%Y/%m/%d %H:%M:%S")
        user_obj.update_attribute('description', description)

        win32security.RevertToSelf() # undo impersonation
        win32api.CloseHandle(handle) # don't leak resources, need to close the handle!

        # Continue...

Searching for the error suggests that I have permission issues trying to do the Active Directory operation, which makes me think it is a double-hop problem. I tried allowing Kerberos delegation for the service account in ADUC and also created SPNs for it like so:

setspn -s HTTP/webSERVER1 contoso\service-acct
setspn -s HTTP/webserver1.contoso.com contoso\service-acct

However, that seems to still have issues and I seem to be stuck. Any suggestions?

SAMBA 4 Set up on Slackware 14.2

Posted: 22 Mar 2021 07:04 PM PDT

I am trying to set up Samba 4 on Slackware 14.2. I have tried numerous smb.conf files I've found on the Internet, and I cannot access the server from Windows 10. I want to use SMB 4; I do not want to use SMB 1 to browse the network. I am looking for detailed instructions on how to set this up, including smb.conf and any other Slackware 14.2 specific settings that may be preventing me from getting this running.
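As a starting point rather than a known-good Slackware recipe: Windows 10 won't browse SMB1 servers by default, so the key line is forcing SMB2 as the minimum protocol. A minimal sketch for /etc/samba/smb.conf (share name and path are placeholders):

[global]
   workgroup = WORKGROUP
   server min protocol = SMB2
   map to guest = Bad User
   security = user

[share]
   path = /srv/share
   read only = no
   guest ok = yes

On Slackware, also confirm smbd/nmbd are started from the init script (chmod +x /etc/rc.d/rc.samba) and that ports 139/445 are reachable through any firewall.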

Unable to start virtual machine/domain in KVM: Failed to get "write" lock

Posted: 22 Mar 2021 06:29 PM PDT

After a host restart, I'm not able to start a virtual machine:

user@server-1:~$ virsh start docker-1
error: Failed to start domain docker-1
error: internal error: process exited while connecting to monitor: 2021-03-23T01:21:58.149079Z qemu-system-x86_64: -blockdev {"node-name":"libvirt-2-format","read-only":false,"driver":"qcow2","file":"libvirt-2-storage","backing":null}: Failed to get "write" lock
Is another process using the image [/apphd/prod/kvm/storage/docker-1-volume-hd.qcow2]?

File is not in use:

user@server-1:~$ sudo fuser -u /apphd/prod/kvm/storage/docker-1-volume-hd.qcow2
user@server-1:~$ sudo lsof | grep qcow
user@server-1:~$ virsh list
 Id   Name   State
--------------------

user@server-1:~$

I have tried this on Ubuntu 18.04/QEMU 2.11 and then upgraded to Ubuntu 20.04/QEMU 4.2.1; the upgrade didn't help solve the issue.

This VM is very big, so I can't easily create a new one from it; there is no available space.

How can I recover from this situation and start this domain?

Thank you
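Two read-only checks that may narrow it down (a sketch): lslocks shows kernel file locks directly, and qemu-img can inspect the image without taking the write lock. If /apphd is an NFS or other network mount, a lock left over from before the restart can also survive on the file server rather than on this host.

sudo lslocks | grep docker-1-volume-hd.qcow2
sudo qemu-img info --force-share /apphd/prod/kvm/storage/docker-1-volume-hd.qcow2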

Singular Blade Server: A/C required in 5x5 closet?

Posted: 22 Mar 2021 05:18 PM PDT

I have a small home setup that I just play with. Mainly, I have been wanting to move my blade server elsewhere and kind of hide it away. I have several potential places, but I would have to do renovations to help protect those areas from the outside elements.

This brings me to thinking about storing it inside a closet I have. Would the server get too hot in the closet with no direct A/C and the door shut?

ASRock Rack X570D4U IPMI login failed after password change

Posted: 22 Mar 2021 04:54 PM PDT

I upgraded my server's mainboard to an ASRock Rack X570D4U.
When I opened the IPMI web interface I logged in with admin/admin.

The web interface asked me to change my password. I generated a 24 char long password with letters, numbers and special characters and saved it to my password manager.
After I changed the password, the web interface instructed me to log in again with the new password.
But now I can't log in with the password. The error message is: Login Failed.

Is it possible to reset the password without access to a monitor? I can't find a VGA cable, and the HDMI port doesn't work because the CPU has no iGPU.
There is an "I forgot my password" button, but it says: Unable to reset the Password for the User. Please try again later.
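Worth knowing before anything else: the IPMI specification caps passwords at 16 bytes (20 with IPMI 2.0), so a 24-character password has very likely been truncated; trying just the first 16 or the first 20 characters of the saved password at the login prompt may get you back in. Failing that, if the host OS is reachable over SSH, the BMC can be managed in-band without a monitor. A sketch with ipmitool (user ID 2 is a guess, so list first):

sudo modprobe ipmi_si ipmi_devintf           # expose the BMC to the local OS
sudo ipmitool user list 1                    # find the admin user's ID on channel 1
sudo ipmitool user set password 2 'NewPass123'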

Detecting Windows Physical Console Logon

Posted: 22 Mar 2021 06:27 PM PDT

I'm trying to find a way to detect a logon where someone is physically at the machine. I know you can do it with Type 2, but the issue is that this event also gets logged when services make a logon request, such as when someone logs on through a service.

One way I found that might be accurate is when Source Network Address shows a local IP like 127.0.0.1.

Is this accurate enough, or is there another way to do it? Microsoft has the worst documentation; the methods keep changing and no single standard is followed.

Upgrading Debian Jessie to Stretch

Posted: 22 Mar 2021 04:31 PM PDT

I followed this tutorial but failed.

I got a lot of "php not found" errors after the update/upgrade commands, but I continued, hoping Stretch would replace the PHP packages.

Now sudo apt-get update gives the following errors:

W: There is no public key available for the following key IDs: 112695A0E562B32A
W: There is no public key available for the following key IDs: 648ACFD622F3D138
W: Failed to fetch https://packages.sury.org/php/dists/jessie/main/binary-amd64/Packages  HttpError404

E: Some index files failed to download. They have been ignored, or old ones used instead.

sudo apt-get upgrade gives the following errors:

E: Failed to fetch https://packages.sury.org/php/pool/main/p/pcre3/libpcre3_8.43-1+0~20200703.7+debian8~1.gbpbfc49f_amd64.deb HttpError404

E: Failed to fetch https://packages.sury.org/php/pool/main/o/openssl/libssl-doc_1.1.1g-1+0~20200421.17+debian8~1.gbpf6902f_all.deb HttpError404

E: Failed to fetch https://packages.sury.org/php/pool/main/o/openssl/libssl-dev_1.1.1g-1+0~20200421.17+debian8~1.gbpf6902f_amd64.deb HttpError404

E: Failed to fetch https://packages.sury.org/php/pool/main/o/openssl/libssl1.1_1.1.1g-1+0~20200421.17+debian8~1.gbpf6902f_amd64.deb HttpError404

E: Failed to fetch https://packages.sury.org/php/pool/main/o/openssl/openssl_1.1.1g-1+0~20200421.17+debian8~1.gbpf6902f_amd64.deb HttpError404

E: Failed to fetch https://packages.sury.org/php/pool/main/p/php-defaults/php-common_76+0~20200511.26+debian8~1.gbpc9beb6_all.deb HttpError404

E: Failed to fetch https://packages.sury.org/php/pool/main/p/php-pear/php-pear_1.10.8+submodules+notgz-1+0~20190219091008.9+jessie~1.gbp1a209a_all.deb HttpError404

E: Failed to fetch https://packages.sury.org/php/pool/main/p/php7.3/php7.3_7.3.19-1+0~20200612.60+debian8~1.gbp6c8fe1_all.deb HttpError404

E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?

sudo apt-get dist-upgrade:

WARNING: The following packages cannot be authenticated!
  libidn2-0 libpcre3 libssl-doc libssl-dev libssl1.1 openssl libicu65 libxml2
  php-common php7.3-intl php7.3-readline php7.3-mysql php7.3-bcmath php7.3-gd
  php7.3-xml php7.3-opcache php7.3-curl php7.3-json php7.3-cgi php7.3-bz2
  php7.3-mbstring php7.3-zip php7.3-cli libapache2-mod-php7.3 php7.3-common
  php-pear php7.3
Install these packages without verification? [y/N] y
Get:1 https://packages.sury.org/php/ jessie/main libidn2-0 amd64 2.2.0-2+0~20200302.4+debian8~1.gbpf85c2e [128 kB]
Err https://packages.sury.org/php/ jessie/main libidn2-0 amd64 2.2.0-2+0~20200302.4+debian8~1.gbpf85c2e
  HttpError404
Get:2 https://packages.sury.org/php/ jessie/main libpcre3 amd64 2:8.43-1+0~20200703.7+debian8~1.gbpbfc49f [339 kB]
Err https://packages.sury.org/php/ jessie/main libpcre3 amd64 2:8.43-1+0~20200703.7+debian8~1.gbpbfc49f
  HttpError404
Get:3 https://packages.sury.org/php/ jessie/main libssl-doc all 1.1.1g-1+0~20200421.17+debian8~1.gbpf6902f [1775 kB]
Err https://packages.sury.org/php/ jessie/main libssl-doc all 1.1.1g-1+0~20200421.17+debian8~1.gbpf6902f
  HttpError404
Get:4 https://packages.sury.org/php/ jessie/main libssl-dev amd64 1.1.1g-1+0~20200421.17+debian8~1.gbpf6902f [1802 kB]
Err https://packages.sury.org/php/ jessie/main libssl-dev amd64 1.1.1g-1+0~20200421.17+debian8~1.gbpf6902f
  HttpError404
Get:5 https://packages.sury.org/php/ jessie/main libssl1.1 amd64 1.1.1g-1+0~20200421.17+debian8~1.gbpf6902f [1550 kB]
Err https://packages.sury.org/php/ jessie/main libssl1.1 amd64 1.1.1g-1+0~20200421.17+debian8~1.gbpf6902f
  HttpError404
Get:6 https://packages.sury.org/php/ jessie/main openssl amd64 1.1.1g-1+0~20200421.17+debian8~1.gbpf6902f [834 kB]
Err https://packages.sury.org/php/ jessie/main openssl amd64 1.1.1g-1+0~20200421.17+debian8~1.gbpf6902f
  HttpError404
Get:7 https://packages.sury.org/php/ jessie/main libicu65 amd64 65.1-1+0~20200223.8+debian8~1.gbp519cf3 [8453 kB]
Err https://packages.sury.org/php/ jessie/main libicu65 amd64 65.1-1+0~20200223.8+debian8~1.gbp519cf3
  HttpError404
Get:8 https://packages.sury.org/php/ jessie/main libxml2 amd64 2.9.9+dfsg-1+0~20200226.5+debian8~1.gbp3b6674 [730 kB]
Err https://packages.sury.org/php/ jessie/main libxml2 amd64 2.9.9+dfsg-1+0~20200226.5+debian8~1.gbp3b6674
  HttpError404
Get:9 https://packages.sury.org/php/ jessie/main php-common all 2:76+0~20200511.26+debian8~1.gbpc9beb6 [16.0 kB]
Err https://packages.sury.org/php/ jessie/main php-common all 2:76+0~20200511.26+debian8~1.gbpc9beb6
  HttpError404
Get:10 https://packages.sury.org/php/ jessie/main php7.3-intl amd64 7.3.19-1+0~20200612.60+debian8~1.gbp6c8fe1 [124 kB]
Err https://packages.sury.org/php/ jessie/main php7.3-intl amd64 7.3.19-1+0~20200612.60+debian8~1.gbp6c8fe1
  HttpError404
Get:11 https://packages.sury.org/php/ jessie/main php7.3-readline amd64 7.3.19-1+0~20200612.60+debian8~1.gbp6c8fe1 [12.2 kB]
Err https://packages.sury.org/php/ jessie/main php7.3-readline amd64 7.3.19-1+0~20200612.60+debian8~1.gbp6c8fe1
  HttpError404
Get:12 https://packages.sury.org/php/ jessie/main php7.3-mysql amd64 7.3.19-1+0~20200612.60+debian8~1.gbp6c8fe1 [118 kB]
Err https://packages.sury.org/php/ jessie/main php7.3-mysql amd64 7.3.19-1+0~20200612.60+debian8~1.gbp6c8fe1
  HttpError404
Get:13 https://packages.sury.org/php/ jessie/main php7.3-bcmath amd64 7.3.19-1+0~20200612.60+debian8~1.gbp6c8fe1 [15.2 kB]
Err https://packages.sury.org/php/ jessie/main php7.3-bcmath amd64 7.3.19-1+0~20200612.60+debian8~1.gbp6c8fe1
  HttpError404
Get:14 https://packages.sury.org/php/ jessie/main php7.3-gd amd64 7.3.19-1+0~20200612.60+debian8~1.gbp6c8fe1 [27.4 kB]
Err https://packages.sury.org/php/ jessie/main php7.3-gd amd64 7.3.19-1+0~20200612.60+debian8~1.gbp6c8fe1

Get:27 https://packages.sury.org/php/ jessie/main php7.3 all 7.3.19-1+0~20200612.60+debian8~1.gbp6c8fe1 [44.1 kB]
Err https://packages.sury.org/php/ jessie/main php7.3 all 7.3.19-1+0~20200612.60+debian8~1.gbp6c8fe1
  HttpError404
E: Failed to fetch https://packages.sury.org/php/pool/main/libi/libidn2/libidn2-0_2.2.0-2+0~20200302.4+debian8~1.gbpf85c2e_amd64.deb  HttpError404

E: Failed to fetch https://packages.sury.org/php/pool/main/p/pcre3/libpcre3_8.43-1+0~20200703.7+debian8~1.gbpbfc49f_amd64.deb HttpError404

E: Failed to fetch https://packages.sury.org/php/pool/main/o/openssl/libssl-doc_1.1.1g-1+0~20200421.17+debian8~1.gbpf6902f_all.deb HttpError404

E: Failed to fetch https://packages.sury.org/php/pool/main/o/openssl/libssl-dev_1.1.1g-1+0~20200421.17+debian8~1.gbpf6902f_amd64.deb HttpError404

E: Failed to fetch https://packages.sury.org/php/pool/main/o/openssl/libssl1.1_1.1.1g-1+0~20200421.17+debian8~1.gbpf6902f_amd64.deb HttpError404

E: Failed to fetch https://packages.sury.org/php/pool/main/o/openssl/openssl_1.1.1g-1+0~20200421.17+debian8~1.gbpf6902f_amd64.deb HttpError404

E: Failed to fetch https://packages.sury.org/php/pool/main/i/icu/libicu65_65.1-1+0~20200223.8+debian8~1.gbp519cf3_amd64.deb HttpError404

E: Failed to fetch https://packages.sury.org/php/pool/main/libx/libxml2/libxml2_2.9.9+dfsg-1+0~20200226.5+debian8~1.gbp3b6674_amd64.deb HttpError404

E: Failed to fetch https://packages.sury.org/php/pool/main/p/php-defaults/php-common_76+0~20200511.26+debian8~1.gbpc9beb6_all.deb HttpError404

E: Failed to fetch https://packages.sury.org/php/pool/main/p/php7.3/php7.3-intl_7.3.19-1+0~20200612.60+debian8~1.gbp6c8fe1_amd64.deb HttpError404

E: Failed to fetch https://packages.sury.org/php/pool/main/p/php7.3/php7.3-readline_7.3.19-1+0~20200612.60+debian8~1.gbp6c8fe1_amd64.deb HttpError404

E: Failed to fetch https://packages.sury.org/php/pool/main/p/php7.3/php7.3-mysql_7.3.19-1+0~20200612.60+debian8~1.gbp6c8fe1_amd64.deb HttpError404

E: Failed to fetch https://packages.sury.org/php/pool/main/p/php7.3/php7.3-bcmath_7.3.19-1+0~20200612.60+debian8~1.gbp6c8fe1_amd64.deb HttpError404

E: Failed to fetch https://packages.sury.org/php/pool/main/p/php7.3/php7.3-gd_7.3.19-1+0~20200612.60+debian8~1.gbp6c8fe1_amd64.deb HttpError404

E: Failed to fetch https://packages.sury.org/php/pool/main/p/php7.3/php7.3-xml_7.3.19-1+0~20200612.60+debian8~1.gbp6c8fe1_amd64.deb HttpError404

E: Failed to fetch https://packages.sury.org/php/pool/main/p/php7.3/php7.3-opcache_7.3.19-1+0~20200612.60+debian8~1.gbp6c8fe1_amd64.deb HttpError404

E: Failed to fetch https://packages.sury.org/php/pool/main/p/php7.3/php7.3-curl_7.3.19-1+0~20200612.60+debian8~1.gbp6c8fe1_amd64.deb HttpError404

E: Failed to fetch https://packages.sury.org/php/pool/main/p/php7.3/php7.3-json_7.3.19-1+0~20200612.60+debian8~1.gbp6c8fe1_amd64.deb HttpError404

E: Failed to fetch https://packages.sury.org/php/pool/main/p/php7.3/php7.3-cgi_7.3.19-1+0~20200612.60+debian8~1.gbp6c8fe1_amd64.deb HttpError404

E: Failed to fetch https://packages.sury.org/php/pool/main/p/php7.3/php7.3-bz2_7.3.19-1+0~20200612.60+debian8~1.gbp6c8fe1_amd64.deb HttpError404

E: Failed to fetch https://packages.sury.org/php/pool/main/p/php7.3/php7.3-mbstring_7.3.19-1+0~20200612.60+debian8~1.gbp6c8fe1_amd64.deb HttpError404

E: Failed to fetch https://packages.sury.org/php/pool/main/p/php7.3/php7.3-zip_7.3.19-1+0~20200612.60+debian8~1.gbp6c8fe1_amd64.deb HttpError404

E: Failed to fetch https://packages.sury.org/php/pool/main/p/php7.3/php7.3-cli_7.3.19-1+0~20200612.60+debian8~1.gbp6c8fe1_amd64.deb HttpError404

E: Failed to fetch https://packages.sury.org/php/pool/main/p/php7.3/libapache2-mod-php7.3_7.3.19-1+0~20200612.60+debian8~1.gbp6c8fe1_amd64.deb HttpError404

E: Failed to fetch https://packages.sury.org/php/pool/main/p/php7.3/php7.3-common_7.3.19-1+0~20200612.60+debian8~1.gbp6c8fe1_amd64.deb HttpError404

E: Failed to fetch https://packages.sury.org/php/pool/main/p/php-pear/php-pear_1.10.8+submodules+notgz-1+0~20190219091008.9+jessie~1.gbp1a209a_all.deb HttpError404

E: Failed to fetch https://packages.sury.org/php/pool/main/p/php7.3/php7.3_7.3.19-1+0~20200612.60+debian8~1.gbp6c8fe1_all.deb HttpError404

cat sources.list

#deb http://debian.mirrors.ovh.net/debian/ jessie main
#deb-src http://debian.mirrors.ovh.net/debian/ jessie main

#deb http://security.debian.org/ jessie/updates main
#deb-src http://security.debian.org/ jessie/updates main

#jessie-updates, previously known as 'volatile'
#deb http://debian.mirrors.ovh.net/debian/ jessie-updates main
#deb-src http://debian.mirrors.ovh.net/debian/ jessie-updates main

#jessie-backports, previously on backports.debian.org
#deb http://debian.mirrors.ovh.net/debian/ jessie-backports main
#deb-src http://debian.mirrors.ovh.net/debian/ jessie-backports main

#deb http://debian.mirrors.ovh.net/debian/ jessie main contrib non-free
#deb-src http://debian.mirrors.ovh.net/debian/ jessie main contrib non-free
#deb http://software.virtualmin.com/gpl/debian/ virtualmin-jessie main
#deb http://software.virtualmin.com/gpl/debian/ virtualmin-universal main

#ruti
#deb http://ftp.us.debian.org/debian jessie-backports main contrib non-free
#ruti2
#deb ftp://ftp.us.debian.org/debian/ wheezy non-free
#deb http://security.debian.org/ wheezy/updates non-free

#deb http://deb.debian.org/debian/ jessie main
#deb http://security.debian.org/ jessie/updates main

#Only these are uncommented below
deb http://deb.debian.org/debian/ stretch main
deb http://security.debian.org/debian-security stretch/updates main
deb http://deb.debian.org/debian/ stretch-updates main

cat /etc/os-release

PRETTY_NAME="Debian GNU/Linux 8 (jessie)"
NAME="Debian GNU/Linux"
VERSION_ID="8"
VERSION="8 (jessie)"
ID=debian
HOME_URL="http://www.debian.org/"
SUPPORT_URL="http://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"

My goal is upgrading Debian 8 to 10, but first step is upgrading to 9. I would be glad if you can help me!
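All of the 404s point at packages.sury.org, which evidently no longer serves packages for jessie, so apt cannot proceed while that repository is still active. A sketch of the cleanup before retrying (the file name holding the sury entry is an assumption; the grep shows where it really lives):

grep -rn sury /etc/apt/sources.list /etc/apt/sources.list.d/
sudo rm /etc/apt/sources.list.d/php.list   # or comment out whichever line grep found
sudo apt-get update
sudo apt-get dist-upgrade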

Gmail failing to accept TLS

Posted: 22 Mar 2021 08:52 PM PDT

I recently set up a postfix mail server. Testing it with other domains, everything seems to work well.

However, when my server tries to send messages to Gmail, they are marked as spam, with the red padlock and the note "rr.com did not encrypt this message".

(rr.com is not my domain; however, the above is exactly what Gmail says.)

After forcing TLS, I find that my server is unable to send messages to Gmail at all. The logs state: (TLS is required, but was not offered by host alt4.gmail-smtp-in.l.google.com[142.250.96.26])

Wait, what? Gmail certainly offers TLS!

What's happening here?

postconf -n

alias_database = $alias_maps
alias_maps = hash:/etc/aliases
append_dot_mydomain = no
biff = no
bounce_template_file = /etc/postfix/bounce.cf
broken_sasl_auth_clients = yes
canonical_maps = hash:/etc/postfix/maps/canonical
command_directory = /usr/bin
compatibility_level = 2
daemon_directory = /usr/lib/postfix/bin
data_directory = /var/lib/postfix
default_destination_concurrency_limit = 5
disable_vrfy_command = yes
dovecot_destination_recipient_limit = 1
home_mailbox = Maildir/
inet_interfaces = all
inet_protocols = ipv4
local_destination_concurrency_limit = 2
mail_owner = postfix
mailbox_command = /usr/lib/dovecot/deliver -m "${EXTENSION}"
mailbox_size_limit = 0
message_size_limit = 104857600
mydestination = $myhostname
mydomain = example.com
myhostname = mail.example.com
mynetworks = 127.0.0.0/8, 10.0.0.0/8
myorigin = $myhostname
queue_directory = /var/spool/postfix
readme_directory = no
recipient_delimiter = +
relay_destination_concurrency_limit = 1
smtp_tls_CAfile = /etc/ssl/cert.pem
smtp_tls_note_starttls_offer = yes
smtp_tls_security_level = may
smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
smtp_tls_verify_cert_match = hostname, nexthop, dot-nexthop
smtp_use_tls = yes
smtpd_banner = $myhostname ESMTP $mail_name
smtpd_helo_required = yes
smtpd_helo_restrictions = permit_mynetworks, reject_non_fqdn_helo_hostname, reject_invalid_helo_hostname, reject_unknown_helo_hostname, permit
smtpd_recipient_restrictions = permit_mynetworks, reject_unknown_client_hostname, reject_unknown_sender_domain, reject_unknown_recipient_domain, reject_unauth_pipelining, permit_sasl_authenticated, reject_unauth_destination, reject_invalid_hostname, reject_non_fqdn_sender
smtpd_relay_restrictions = permit_mynetworks, permit_sasl_authenticated, defer_unauth_destination
smtpd_sasl_auth_enable = yes
smtpd_sasl_authenticated_header = yes
smtpd_sasl_local_domain = $myhostname
smtpd_sasl_path = private/dovecot-auth
smtpd_sasl_security_options = noanonymous
smtpd_sasl_type = dovecot
smtpd_sender_login_maps = $virtual_mailbox_maps
smtpd_sender_restrictions = permit_mynetworks, reject_unknown_sender_domain, reject_sender_login_mismatch,
smtpd_tls_CAfile = /etc/ssl/cert.pem
smtpd_tls_ask_ccert = yes
smtpd_tls_cert_file = /etc/letsencrypt/live/example.com/fullchain.pem
smtpd_tls_ciphers = high
smtpd_tls_key_file = /etc/letsencrypt/live/example.com/privkey.pem
smtpd_tls_loglevel = 1
smtpd_tls_protocols = !TLSv1 !SSLv2 !SSLv3
smtpd_tls_security_level = may
smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
smtpd_tls_session_cache_timeout = 3600s
smtpd_use_tls = yes
unknown_address_reject_code = 550
unknown_client_reject_code = 550
unknown_hostname_reject_code = 550
unknown_local_recipient_reject_code = 550
virtual_alias_maps = hash:/etc/postfix/maps/valiases
virtual_mailbox_domains = hash:/etc/postfix/maps/vmailbox-domains
virtual_mailbox_maps = hash:/etc/postfix/maps/vmailbox-users
virtual_transport = dovecot
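The rr.com padlock note and the "TLS ... was not offered" log line both point away from Gmail and toward the path in between: transparent port-25 interception by the ISP (rr.com is Road Runner), which rewrites the SMTP session and strips the STARTTLS capability. A quick way to see what your server is actually offered (posttls-finger ships with Postfix; the openssl line is the generic equivalent):

posttls-finger gmail-smtp-in.l.google.com
openssl s_client -starttls smtp -connect gmail-smtp-in.l.google.com:25

If STARTTLS is missing from the EHLO reply here but present when the same check runs from another network, the fix is to relay outbound mail through the ISP's smarthost or another relay, typically on port 587.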

Can I extend the session length for my Service Account?

Posted: 22 Mar 2021 05:13 PM PDT

I've created a Service Account so that Appfigures could connect and get data.

The issue is that we have to re-verify our Google Play account in Appfigures every week. Appfigures asked whether we can extend the session length to avoid having to relink so often; the usual default is 14 days, and they want to know what our limit is currently set to.

I've reviewed the console and looked in the help, but I haven't found anything about this.

Could you help me, please?

Debian Stuck at Booting from Hard Disk after installation on KVM

Posted: 22 Mar 2021 05:02 PM PDT

I'm trying to install Debian under QEMU/KVM on RouterOS v5.25.

I tried:

debian-10.8.0-i386-netinst.iso
debian-8.11.0-i386-kde-CD-1.iso
debian-live-9.0.0-i386-gnome.iso

on a disk image created using: qemu-img.exe create -f raw debian.img 10G

During the installation process, everything is fine up to the end.

But right after the installation finishes and the VM reboots, it shows the boot screen counting to 4, then gets stuck at

Booting from Hard Disk...

GRUB was installed to the master boot record during installation, and I've also tried installing it to /dev/sda.


Note: it's not rebooting itself; it's just stuck.

Here are the boot parameters:

[screenshot]

RouterOS KVM configuration :

[screenshot]

(For testing) I installed debian-6.0.10-i386-netinst.iso; it installed and booted without problems, but nothing newer than that version boots.

What causes this problem?

How to detect NVIDIA GPU with Puppet

Posted: 22 Mar 2021 06:38 PM PDT

I have some tasks I only want to run on machines that have NVIDIA GPUs. Is there a good way with Puppet to be able to determine if a specific agent has an NVIDIA GPU or not? I'm able to do it in bash by checking to see if /usr/bin/nvidia-smi exists, but I'm not sure how I should do this in Puppet. Also if there's a better way to do it in bash instead of this way, please let me know.
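One idiomatic route is an external fact: Facter executes anything in its facts.d directory and turns key=value output into a fact the agent reports automatically, so manifests can branch on it. A sketch (the path, fact name, and the lspci fallback are my choices; lspci helps because nvidia-smi only appears once the driver package is installed):

#!/bin/bash
# /etc/puppetlabs/facter/facts.d/nvidia.sh -- must be executable
if [ -x /usr/bin/nvidia-smi ] || lspci 2>/dev/null | grep -qi nvidia; then
    echo "has_nvidia_gpu=true"
else
    echo "has_nvidia_gpu=false"
fi

A manifest can then guard the GPU-only resources with something like if $facts['has_nvidia_gpu'] == 'true' { ... } (external fact values arrive as strings).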

Odoo 10 CE performance tuning - enabling workers

Posted: 22 Mar 2021 08:01 PM PDT

I will try to explain as best I can the problem I have with Odoo 10 CE running on an Ubuntu 16.04 LTS VM hosted on an on-premise HP ProLiant G6 running Hyper-V.

Physical server specs:

  • Processor Intel(R) Xeon(R) CPU X5560 @ 2.80GHz, 2800 MHz, 4 core(s), 8 logical processor(s)
  • OS Name Microsoft Windows Server 2012 R2 Datacenter
  • Installed Physical Memory (RAM) 16,0 GB
  • 5x 10k SAS drives raid 1+0 (one hot spare)

Ubuntu VM specs:

carlo@enecom:~$ lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                8
On-line CPU(s) list:   0-7
Thread(s) per core:    1
Core(s) per socket:    8
Socket(s):             1
NUMA node(s):          1
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 26
Model name:            Intel(R) Xeon(R) CPU           X5560  @ 2.80GHz
Stepping:              5
CPU MHz:               2762.494
BogoMIPS:              5524.98
Hypervisor vendor:     Microsoft
Virtualization type:   full
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              8192K
NUMA node0 CPU(s):     0-7
Flags:                 fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx lm constant_tsc rep_good nopl pni ssse3 cx16 sse4_1 sse4_2 popcnt hypervisor lahf_lm kaiser
carlo@enecom:~$

4 GB RAM machine:

carlo@enecom:~$ free -m
              total        used        free      shared  buff/cache   available
Mem:           3164         291        1984         113         889        2684
Swap:          4091           0        4091

I would like to enable workers to get better performance. There are only a few users using the Odoo instance.

Can someone please help me with the error I get every time I change workers to a value > 0?

2018-01-13 11:23:36,666 9225 ERROR ENECOM odoo.http: Exception during JSON request handling.
Traceback (most recent call last):
  File "/odoo/odoo-server/odoo/http.py", line 640, in _handle_exception
    return super(JsonRequest, self)._handle_exception(exception)
  File "/odoo/odoo-server/odoo/http.py", line 677, in dispatch
    result = self._call_function(**self.params)
  File "/odoo/odoo-server/odoo/http.py", line 333, in _call_function
    return checked_call(self.db, *args, **kwargs)
  File "/odoo/odoo-server/odoo/service/model.py", line 101, in wrapper
    return f(dbname, *args, **kwargs)
  File "/odoo/odoo-server/odoo/http.py", line 326, in checked_call
    result = self.endpoint(*a, **kw)
  File "/odoo/odoo-server/odoo/http.py", line 935, in __call__
    return self.method(*args, **kw)
  File "/odoo/odoo-server/odoo/http.py", line 506, in response_wrap
    response = f(*args, **kw)
  File "/odoo/odoo-server/addons/bus/controllers/main.py", line 35, in poll
    raise Exception("bus.Bus unavailable")
Exception: bus.Bus unavailable

This is my odoo-server.conf (part of it)

limit_memory_hard = 2147483648
limit_memory_soft = 1572864000
limit_request = 8192
limit_time_cpu = 600
limit_time_real = 1200
limit_time_real_cron = -1
workers = 9
xmlrpc = True
xmlrpc_interface =
xmlrpc_port = 8069
longpolling_port = 8072
max_cron_threads = 2

Can you please help me with this error?
I'm also running Odoo behind Apache2 as a reverse proxy.
Maybe my values in the conf file are not correct?
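For what it's worth, "bus.Bus unavailable" is what Odoo raises when the longpolling side is unreachable: with workers > 0, /longpolling is served by a separate gevent process on longpolling_port (8072 in the config above), and the reverse proxy must route it there while everything else still goes to 8069. A sketch for Apache (stock module names; fold the ProxyPass lines into the existing vhost, with /longpolling first):

sudo a2enmod proxy proxy_http
# inside the VirtualHost:
#   ProxyPass        /longpolling/ http://127.0.0.1:8072/longpolling/
#   ProxyPassReverse /longpolling/ http://127.0.0.1:8072/longpolling/
#   ProxyPass        /             http://127.0.0.1:8069/
#   ProxyPassReverse /             http://127.0.0.1:8069/
sudo systemctl reload apache2

Separately, workers = 9 is on the high side for 4 GB of RAM and a handful of users; 2-4 workers may be a safer starting point.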

can't exclude directories on backuppc

Posted: 22 Mar 2021 07:03 PM PDT

I am trying to configure BackupPC to EXCLUDE some directories from my backups.

I need to exclude many (very big) cache directories from backups on different machines.

Can you help me?

This is my configuration, but it doesn't work; browsing the backups, I always see the /wp-content/cache/ directory filled with GBs of files:

$Conf{BackupFilesExclude} = {
  '*' => [
    '/*/wp-content/cache/',
    '/wp-content/cache/*',
    './wp-content/cache/',
    'wp-content/cache/',
    'wp-content/cache/*'
  ]
};

I tried many values for $Conf{BackupFilesExclude} without success.

Thanks.
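For what it's worth, how these patterns match depends on the XferMethod: with rsync they are passed through as --exclude patterns anchored at the share root, so one clean anchored entry per depth usually beats a shotgun list. A sketch (assumes rsync transfers and that the web root sits at the top of the share; existing backups keep what they already stored, so the change only shows up from the next backup on):

$Conf{BackupFilesExclude} = {
  '*' => [
    '/wp-content/cache',
    '/*/wp-content/cache',
  ]
};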

Change linux multicast interface

Posted: 22 Mar 2021 05:04 PM PDT

Why does my multicast traffic always go through the wlan0 interface?

I tried:

ip route add 224.0.0.0/4 dev lo
ip link set dev lo multicast on
ip route flush cache

But VLC casting to 224.0.0.1:1111 still goes through the wlan0 interface.

ip route:

default via 192.168.0.1 dev wlan0 proto static metric 600
192.168.0.0/24 dev wlan0 proto kernel scope link src 192.168.0.102 metric 600
224.0.0.0/4 dev lo scope link
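A route entry only sets the default egress: an application can still pick its own output interface on the socket (IP_MULTICAST_IF), and VLC exposes exactly that. A sketch (the stream pipeline is a placeholder; --miface overrides the routing table):

cvlc input.mp4 --miface=lo --sout '#standard{access=udp,mux=ts,dst=224.0.0.1:1111}'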

Group Policy for Microsoft Edge extensions

Posted: 22 Mar 2021 10:05 PM PDT

How can I configure my group policy, to forcefully install a specific extension to MS Edge on client machines?

Now that Edge is starting to support more and better extensions, it is becoming a valid option for secure browsing. If I want to allow my users to use Edge, I would expect them all to have the required security extensions, e.g. an ad blocker (I specifically like uBlock Origin, but there are others) and any other required extension.

How to connect to OpenVPN clients from LAN 'members'

Posted: 22 Mar 2021 09:03 PM PDT

Working on an IoT type of thing, I want to connect some devices "in the wild" to servers in AWS through OpenVPN on an EC2 instance.

So far I have been able to set up an EC2 instance configured as an OpenVPN server, and I have the client devices connecting to the VPN successfully. This was all set up using this guide - https://www.digitalocean.com/community/tutorials/how-to-setup-and-configure-an-openvpn-server-on-centos-6

The OpenVPN clients are getting 10.8.0.x IPs and can talk to each other via those IPs. I can also talk to these IPs from the OpenVPN server itself. So far so good.

I also have other EC2 instances on AWS, in the same VPC and subnet as the OpenVPN server. These instances cannot currently reach the OpenVPN clients via their 10.8.0.x IPs. The OpenVPN clients can reach the instances by their private subnet IPs (10.101.x.x), but they present themselves only with the IP address of the OpenVPN server.

What do I need to do to:

A. Enable the EC2 instances to send messages to individual OpenVPN client devices (probably via their OpenVPN addresses, but other options are welcome).

B. Let the EC2 instances see the origin IP addresses of the clients rather than just the server's IP, when they send messages to the server. -- This is secondary, really, as the clients would identify themselves in their requests.

Edit

Devices are in distinct geographical locations and not on a common LAN, each connecting via 3G/4G. Each device needs to send messages to all the EC2 instances, and each EC2 instances needs to send messages to some of the devices.

            /- AWS VPC & public subnet ----------------\
            |                                          |
deviceA ----+-\                       /-- ec2_A        |
10.8.0.a    | |                       |   10.101.0.a   |
_______     | >- OpenVPN server ------<                |
            | |  10.8.0.1 / 10.101.0.x |               |
deviceB ----+-/                       \-- ec2_B        |
10.8.0.b    |                             10.101.0.b   |
            \------------------------------------------/
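A sketch of what part A typically takes on AWS (all IDs below are placeholders): the VPC route table needs a route for 10.8.0.0/24 pointing at the OpenVPN instance, and that instance needs its source/destination check disabled so EC2 will forward packets not addressed to it. For part B, the MASQUERADE/NAT rule on the OpenVPN server is what hides the client IPs; with the return route in place it can be dropped for VPC-bound traffic.

# placeholder IDs -- substitute the OpenVPN instance and the subnet's route table
aws ec2 modify-instance-attribute --instance-id i-0aaaabbbbccccdddd --no-source-dest-check
aws ec2 create-route --route-table-id rtb-0aaaabbbbccccdddd \
    --destination-cidr-block 10.8.0.0/24 --instance-id i-0aaaabbbbccccdddd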

How to use Varnish cache with Set-Cookie named Mp3List and PHPSESSID

Posted: 22 Mar 2021 06:07 PM PDT

I am new to PHP and I am interested in using Varnish to improve site performance.

I installed the latest version of Varnish (4.0.2):

HTTP/1.1 200 OK
Date: Sat, 06 Dec 2014 07:24:47 GMT
Server: Apache/2.2.29 (Unix) mod_ssl/2.2.29 OpenSSL/1.0.1e-fips mod_bwlimited/1.4
X-Powered-By: PHP/5.4.34
Expires: Thu, 19 Nov 1981 08:52:00 GMT
Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
Pragma: no-cache
Set-Cookie: PHPSESSID=86dc704d405a80d0c012de043cd9408b; path=/
Set-Cookie: Mp3List=WDVMG2G4; expires=Tue, 03-Dec-2024 07:24:47 GMT
Vary: Accept-Encoding
Content-Type: text/html
X-Varnish: 2
Age: 0
Via: 1.1 varnish-v4
Connection: keep-alive

I use cookies named Mp3List and PHPSESSID, so I can't cache my pages. I used a VCL with:

sub vcl_recv {
    call normalize_req_url;
    set req.http.Cookie = regsub(req.http.Cookie, "^;\s*", "");
    # Happens before we check if we have this in cache already.
    #
    # Typically you clean up the request here, removing cookies you don't need,
    # rewriting the request, etc.
    # If the requested URL starts like "/cart.asp" then immediately pass it to the given
    # backend and DO NOT cache the result ("pass" basically means "bypass the cache").
    if (req.url ~ "^/playlist\.php$") {
        return (pass);
    }
}

sub vcl_backend_response {
    # Happens after we have read the response headers from the backend.
    #
    # Here you clean the response headers, removing silly Set-Cookie headers
    # and other mistakes your backend does.
    if (beresp.http.Set-Cookie) {
        set beresp.http.Set-Cookie = regsub(beresp.http.Set-Cookie, "^php", "");
        if (beresp.http.Set-Cookie == "") {
            unset beresp.http.Set-Cookie;
        }
    }
    unset beresp.http.Cache-Control;
    set beresp.http.Cache-Control = "public";
}

sub vcl_deliver {
    # Happens when we have all the pieces we need, and are about to send the
    # response to the client.
    #
    # You can do accounting or modifying the final object here.
    # Was it a HIT or a MISS?
    if (obj.hits > 0) {
        set resp.http.X-Cache = "HIT";
    } else {
        set resp.http.X-Cache = "MISS";
    }
    # And add the number of hits in the header:
    set resp.http.X-Cache-Hits = obj.hits;
}

sub normalize_req_url {
    # Strip out Google Analytics campaign variables. They are only needed
    # by the javascript running on the page
    # utm_source, utm_medium, utm_campaign, gclid, ...
    if (req.url ~ "(\?|&)(gclid|cx|ie|cof|siteurl|zanpid|origin|utm_[a-z]+|mr:[A-z]+)=") {
        set req.url = regsuball(req.url, "(gclid|cx|ie|cof|siteurl|zanpid|origin|utm_[a-z]+|mr:[A-z]+)=[%.-_A-z0-9]+&?", "");
    }
    set req.url = regsub(req.url, "(\?&?)$", "");
}

Now I have removed Mp3List using this, but PHPSESSID is still there. What can I do to remove PHPSESSID?

HTTP/1.1 200 OK
Date: Sat, 06 Dec 2014 07:44:46 GMT
Server: Apache/2.2.29 (Unix) mod_ssl/2.2.29 OpenSSL/1.0.1e-fips mod_bwlimited/1.4
X-Powered-By: PHP/5.4.34
Expires: Thu, 19 Nov 1981 08:52:00 GMT
Pragma: no-cache
Vary: Accept-Encoding
Content-Type: text/html
Set-Cookie: PHPSESSID=ce934af9a97bd7d0fd14304bd49f8fe2; path=/
Cache-Control: public
X-Varnish: 163843
Age: 0
Via: 1.1 varnish-v4
X-Cache: MISS
X-Cache-Hits: 0
Connection: keep-alive

Does someone know how to bypass PHPSESSID and edit the VCL to use Varnish effectively?
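A common approach (a sketch in VCL 4.0, matching the Via header above) is to strip the PHPSESSID cookie from requests that don't need a session; note that the vcl_hash above also feeds the whole Cookie header into the hash, which by itself makes every visitor's cache entry unique:

sub vcl_recv {
    if (req.http.Cookie) {
        # drop PHPSESSID, keep any other cookies intact
        set req.http.Cookie = regsuball(req.http.Cookie, "PHPSESSID=[^;]+(; )?", "");
        if (req.http.Cookie == "") {
            unset req.http.Cookie;
        }
    }
}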

Increase number of connections per socket

Posted: 22 Mar 2021 05:04 PM PDT

How do I increase the number of connections each socket can accept?

I know that you can increase the total number of connections using:

# sysctl kern.ipc.somaxconn=4096  

But each socket is then still limited to 128 connections.

How do I increase the total connections per socket?
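The 128 is not another sysctl: it is the backlog argument each application passes to listen(2), and kern.ipc.somaxconn is only the ceiling that value gets clamped to. So after raising the sysctl, the per-socket number also has to be raised in each daemon's own configuration (a sketch; nginx is an arbitrary example):

# ceiling first
sysctl kern.ipc.somaxconn=4096
# then the application must ask for more, e.g. in nginx:  listen 80 backlog=4096;
# verify the per-socket listen queues on FreeBSD:
netstat -Lan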

MySQL Cluster SQL Node not synchronizing

Posted: 22 Mar 2021 08:01 PM PDT

I am new to MySQL Cluster and am trying to set up a new cluster for our new application. I have set up 5 CentOS 64-bit VMs and got the cluster to work using MySQL Cluster 7.2. I am trying to test it and have some issues.

"I have successfully installed the Cluster with 5 nodes (2 Data, 1 Mgmt and 2 SQL Nodes). While testing the cluster I have hit on one scenario where I am stuck and cannot make it to work. Here is the screen shot of the Management Node displaying all cluster nodes:

ndb_mgm> show

Cluster Configuration

[ndbd(NDB)]     2 node(s)
id=2    @10.0.3.138  (mysql-5.5.30 ndb-7.2.12, Nodegroup: 0)
id=3    @10.0.3.83   (mysql-5.5.30 ndb-7.2.12, Nodegroup: 0, Master)

[ndb_mgmd(MGM)] 1 node(s)
id=1    @10.0.3.135  (mysql-5.5.30 ndb-7.2.12)

[mysqld(API)]   2 node(s)
id=4    @10.0.3.87   (mysql-5.5.30 ndb-7.2.12)
id=5    @10.0.3.22   (mysql-5.5.30 ndb-7.2.12)

Here is the scenario:

While all nodes in the cluster are working, as part of my test I shut down SQL Node 4. While this node is offline, I drop a database which is part of the cluster databases on SQL Node 5. When I bring the offline SQL Node 4 back up and it rejoins the cluster, the dropped database still shows up. It should sync with the cluster: the database dropped while SQL Node 4 was offline should be removed and should not show up on SQL Node 4. This is a real scenario that can happen.

Also, I am searching for a MySQL Cluster test document which describes these scenarios, and I cannot seem to find one.

Any help will be greatly appreciated.

Thanks

Clustered file systems as XenServer storage

Posted: 22 Mar 2021 06:07 PM PDT

I want to use shared storage for a XenServer environment with 4 host servers running various VMs under XenServer. I plan to use 2 extra servers as storage, with high availability of some sort.

While the most obvious solution is iSCSI SAN software, I have seen recommendations to skip iSCSI altogether and go for clustered file systems; the most prominent seem to be GFS2 and Lustre.

However, I don't see options in XenServer which support connecting to such clustered systems.

First of all, do I then need to make the 4 XenServer hosts part of the cluster as well? As I am installing via the Citrix download, I am not familiar with how I would go about that, or whether I even need to.

I don't need more than 30 VMs, and storage is limited to below 4 TB. Under these circumstances, what cluster type is best? Or does this not work at all with Xen?

Should I disable write caching on my Windows 2008 VM?

Posted: 22 Mar 2021 06:28 PM PDT

I have a Windows Server 2008 x64 Standard virtual machine that runs on a machine with a hardware RAID controller, a Perc 6/i, which has a battery on-board.

Doing everything I can for additional performance, I think I should disable this. Is this very dangerous, though?

My understanding is that battery-backed write caching gives a performance boost to the host OS, telling it a write is complete while the data is still sitting in the cache waiting to be written.

However, I can't see how it would be detrimental to performance; is there a gain (even if marginal) to enabling or disabling it?

P.S. The machine has backup power.

Here is a screen shot for clarification:

[screenshot]

linux - disabling and enabling link-local

Posted: 22 Mar 2021 10:05 PM PDT

I'm trying to find out how to disable and enable link-local addresses on my Linux machine (also on ARM). Basically, for IPv4 and IPv6, I would like to either disable (or bring down) both addresses together, or individually if need be, and then enable them again. I would also like to check whether they have been disabled or enabled each time.

Is it possible to execute system command-line scripts to achieve this? For instance:

ip -f inet route
or
ip -f inet6 route

Is it possible to do this without restarting the network?

Also, using the 2 examples above, I have obtained both the IPv4 and IPv6 addresses. For example:

ip -f inet route | grep "dev eth0" | cut -d' ' -f1
ip -f inet6 route | grep "dev eth0" | cut -d' ' -f1

but I am concerned that the grep string is not unique enough to match only the line(s) where the address(es) are. Is there a better way to do this?

Thanks.
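A sketch of doing it with ip(8) alone, with no network restart (eth0 is an assumption; "disabling" IPv6 link-local here means flushing the address, since the kernel recreates it whenever the link comes back up):

ip -6 addr flush dev eth0 scope link            # remove the fe80:: address
ip -6 addr show dev eth0 scope link             # empty output = currently disabled
ip link set eth0 down && ip link set eth0 up    # regenerates the IPv6 link-local
ip -o -4 addr show dev eth0 | awk '{print $4}'  # tighter than grepping the route output

An IPv4 link-local (169.254.0.0/16) address only exists if a zeroconf daemon such as avahi-autoipd assigned one; it can be removed with ip addr del in the same way.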

How do I stop Squid proxy from corrupting JAR files?

Posted: 22 Mar 2021 07:03 PM PDT

Our internal corporate NTLM proxy (also Squid, I think) randomly returns 407 errors for some reason, and it's pointless to even try to get someone to fix that.

On my Windows computer I have an installation of the Cntlm proxy on port 3128, to be able to use non-NTLM-aware software. However, I still randomly get 407 errors from the corporate proxy.

To work around this, I set up a Squid Cache (version 2.7.STABLE8) proxy on localhost forwarding to Cntlm, thinking I could have it retry on error.

I use the following configuration:

cache_dir ufs c:/ws/tmp/squidcache 500 16 256
http_port 3127
cache_peer 127.0.0.1 parent 3128 0 no-query default
acl all src 127.0.0.1
never_direct allow all
http_access allow all
retry_on_error on
maximum_single_addr_tries 10
maximum_object_size 100 MB

It mostly works, but the problem is that JAR files end up slightly corrupted. I haven't figured out exactly how they are corrupted, but they are generally a couple of bytes longer than they should be, and even bytes near the beginning of the files are corrupted. And it's different each time.

I found http://javatechniques.com/blog/squid-corrupts-jar-files/ which suggests it might be a problem with MIME type configuration and Squid treating JAR files as ASCII, but it does not tell you how to fix it in Squid.

I tried adding

\.jar$      application/octet-stream        anthony-compressed.gif  -   image   +download
# the default
.           application/octet-stream        anthony-unknown.gif     -   image   +download

to Squid's mime.conf and clearing the cache, but that didn't help. I didn't really expect it to help, since I think those entries are only used for proxying FTP.

Setting the document root on a GoDaddy virtual server

Posted: 22 Mar 2021 09:03 PM PDT

Hi, I am not familiar with setting up servers. I want to set the document root to point to another directory. This is not mod_rewrite for URLs that I am trying to do.

Is it an httpd.conf file or a .htaccess file that I need to create? And what is the code to do it?
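For what it's worth, DocumentRoot is a server-level directive: it goes in httpd.conf (or a file it includes), and a .htaccess file cannot change it. A minimal sketch in Apache 2.4 syntax (domain and paths are placeholders for wherever the site actually lives):

<VirtualHost *:80>
    ServerName example.com
    DocumentRoot /home/user/public_html/site
    <Directory /home/user/public_html/site>
        Require all granted
    </Directory>
</VirtualHost>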
