Saturday, September 11, 2021

Recent Questions - Server Fault

Not permitted to view my newly installed phpMyAdmin

Posted: 11 Sep 2021 09:56 PM PDT

I'm using Google Compute Engine to host a couple of applications under one .dev domain, so SSL is required.

I have Apache installed based on this guide, and my /var/www/ directory looks like:

- domain.dev/ (currently in use - WordPress site)
    - html/
        - wp-content/
        - index.php
        - ...
    - log/
- database.domain.dev/ (currently unused - I want to access phpMyAdmin by going to this URL)
    - html/
    - log/
- subdomain.domain.dev/ (currently in use - a separate project but still under the same domain)
    - html/
        - css/
        - scripts/
        - index.php
        - ...
    - log/

Right now I can visit these three URLs and they work, except of course database.domain.dev: it just gives me the default page that shows Apache is working. I'm trying to install phpMyAdmin on this subdomain, but it's not working.

I already have MySQL installed on this server; it's what WordPress is using. I plan to add another database and another user to it, which is why I'm trying to install phpMyAdmin, as it's easier to manage from there.

SSL is already working, since I can see the default Apache page when I visit the site. The DNS settings have been taken care of in GCP's Cloud DNS.

On my /etc/httpd/sites-available/database.domain.dev.conf, I have this:

<VirtualHost *:80>
    ServerName www.database.domain.dev
    ServerAlias database.domain.dev
    DocumentRoot /var/www/database.domain.dev/html
    ErrorLog /var/www/database.domain.dev/log/error.log
    CustomLog /var/www/database.domain.dev/log/requests.log combined
    RewriteEngine on
    RewriteCond %{SERVER_NAME} =www.database.domain.dev [OR]
    RewriteCond %{SERVER_NAME} =database.domain.dev
    RewriteRule ^ https://%{SERVER_NAME}%{REQUEST_URI} [END,NE,R=permanent]
</VirtualHost>

On my /etc/httpd/conf.d/phpMyAdmin.conf, I have this:

Alias /manage /usr/share/phpMyAdmin

...

<VirtualHost *:80>
    ServerAdmin support@domain.dev
    DocumentRoot /var/www/database.domain.dev/html
    ServerName database.domain.dev
</VirtualHost>

When I visited https://database.domain.dev/manage, I expected phpMyAdmin to pop up, but I got an error saying I'm not permitted to view the page. When I tried https://database.domain.dev/bogus, it said the URL can't be found. That gives me the idea that the alias is working, but I don't know why I don't have access to view the page.
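For reference (a sketch, not a confirmed fix): on RHEL-family systems, the packaged /etc/httpd/conf.d/phpMyAdmin.conf ships a <Directory> block that only allows access from localhost via Require local, which produces exactly this kind of "not permitted" response for remote clients. The block looks something like the following; loosening the Require directive (ideally to specific trusted IPs rather than all) is the usual adjustment:

```apache
<Directory /usr/share/phpMyAdmin/>
    AddDefaultCharset UTF-8
    # The packaged default is "Require local" (or "Require ip 127.0.0.1 ::1"),
    # which rejects every remote client with 403 Forbidden.
    Require all granted
</Directory>
```

After editing, `apachectl configtest` and a reload of httpd would apply the change.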

Why can I change the reserved blocks on a read-only mounted ext4 filesystem?

Posted: 11 Sep 2021 10:17 PM PDT

I would have expected an error ("sorry, FS is read-only"), but it is possible. This is unexpected and counterintuitive; is there a reason?

Linux files 5.11.0-27-generic #29~20.04.1-Ubuntu SMP Wed Aug 11 15:58:17 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
files@files:/mnt/disk$ mount | grep /mnt/disk/005
/dev/sdh on /mnt/disk/005 type ext4 (ro,nosuid,nodev,noexec,relatime,user)
files@files:/mnt/disk$ df /mnt/disk/005/
Filesystem      1K-blocks       Used Available Use% Mounted on
/dev/sdh       7751367424 7332824836  27824876 100% /mnt/disk/005
files@files:/mnt/disk$ sudo tune2fs -r 0 /dev/sdh
tune2fs 1.45.5 (07-Jan-2020)
Setting reserved blocks count to 0
files@files:/mnt/disk$ df /mnt/disk/005/
Filesystem      1K-blocks       Used Available Use% Mounted on
/dev/sdh       7751367424 7332824836 418526204  95% /mnt/disk/005
files@files:/mnt/disk$
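A plausible explanation (sketch): tune2fs edits the ext4 superblock by writing to the block device node itself, so the filesystem-level ro mount option never comes into play; mounting with -o ro does not make the underlying device read-only. Something like the following shows the superblock field directly, and how the device itself could be write-protected:

```shell
# Read the reserved-blocks field straight from the device's superblock.
sudo dumpe2fs -h /dev/sdh | grep -i 'Reserved block count'

# A ro *mount* only stops writes that go through the filesystem; to protect
# the device node itself you would need something like:
sudo blockdev --setro /dev/sdh
```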

Tenable su+sudo and SELinux

Posted: 11 Sep 2021 08:08 PM PDT

My not-a-sysadmin boss wants me to explain this, but I can't really find an answer. When using Tenable SC to scan a RHEL 7 system, the account used for the scan connects via SSH and then uses sudo to perform its checks. But when SELinux is enforcing, some checks cannot be performed; one such check does a cat of /etc/passwd but is denied while SELinux is enforcing. The workaround is to configure SC to use su+sudo for the connecting account: SC first makes an SSH connection with an unprivileged account, then does an su to a user with sudo rights that can run the checks, and now they work. So basically I am trying to understand why logging in directly with a sudo user to run certain checks fails with SELinux enforcing, but logging in and then doing an su to a sudo user succeeds. Tenable's articles on this don't really cover the SELinux aspect.
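To move past guessing, the denial itself can be inspected on the target host; a diagnostic sketch (assuming auditd and setroubleshoot are available on the RHEL 7 system):

```shell
# Show recent SELinux AVC denials, then a human-readable analysis of them.
sudo ausearch -m avc -ts recent
sudo sealert -a /var/log/audit/audit.log

# Compare the SELinux context of the two login paths: run this once when
# logged in directly via SSH, and once after the su; a direct login and an
# su'd session can land in different (un)confined user domains.
id -Z
```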

Use dnsmasq as DNS for OpenVPN

Posted: 11 Sep 2021 06:18 PM PDT

So, to the best of my understanding, I have all the pieces, namely DNSmasq and OpenVPN, working fine, although independently. What I've done:

  • Installed OpenVPN using this: https://github.com/Nyr/openvpn-install
    • After install, I'm able to connect client (PC, phone), works. No issue.
  • I've installed DNSmasq and also appears to be running and working as expected
    • I've blocked one or two sites, i.e. pointed them to 0.0.0.0 in the /etc/hosts file, and when I do nslookup thatdomain.com, I get the 0.0.0.0 response

This is currently my /etc/openvpn/server/server.conf

local 124.120.60.254
port 1194
proto udp
dev tun
ca ca.crt
cert server.crt
key server.key
dh dh.pem
auth SHA512
tls-crypt tc.key
topology subnet
server 10.8.0.0 255.255.255.0
push "redirect-gateway def1 bypass-dhcp"
ifconfig-pool-persist ipp.txt
push "dhcp-option DNS 94.140.14.14"
push "dhcp-option DNS 94.140.15.15"
keepalive 10 120
cipher AES-256-CBC
user nobody
group nogroup
persist-key
persist-tun
verb 3
crl-verify crl.pem
explicit-exit-notify
duplicate-cn

And the only change I've made to the default /etc/dnsmasq.conf file is uncommenting and indicating the interface this line:

interface=tun0

Where I need help:

How do I make OpenVPN use DNSmasq for all DNS requests? I just can't seem to find a definitive answer on how to achieve that: which files to change, and what to add.

Am I missing any steps?
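One common arrangement (a sketch based on the config above, where the server's own tunnel address in the 10.8.0.0/24 subnet is 10.8.0.1): make dnsmasq listen on the tunnel address, and push that address to clients in place of the two public resolvers.

```conf
# /etc/openvpn/server/server.conf -- replace the two dhcp-option lines with:
push "dhcp-option DNS 10.8.0.1"

# /etc/dnsmasq.conf -- alongside interface=tun0:
listen-address=10.8.0.1
bind-dynamic
```

bind-dynamic (rather than bind-interfaces) lets dnsmasq cope with tun0 appearing after it starts; both services would need a restart afterwards.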

Which directories get affected by "Linux PCI-DSS patch updates" and "Kernel update"

Posted: 11 Sep 2021 05:04 PM PDT

To achieve PCI-DSS compliance, the company should apply all OS patches on a monthly basis.

However, these patches affect file integrity monitoring, for example in:

/etc/bin, /etc/include, and many more directories.

My question is: how do I know which directories are affected by a specific patch?
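On RPM-based systems the package manager can answer this directly; as a sketch (package names here are just examples): `rpm -ql <package>` lists every file an installed package owns, and `repoquery -l <package>` (from yum-utils) does the same for a not-yet-installed update. Reducing such a file list to the set of affected directories is a one-liner:

```shell
# Print the unique parent directories of a list of file paths read from stdin.
# Typical use (the rpm/repoquery commands assume an RPM-based host):
#   rpm -ql openssl | unique_dirs        # directories touched by installed pkg
#   repoquery -l openssl | unique_dirs   # same, for a package in the repos
unique_dirs() {
    while IFS= read -r path; do
        dirname "$path"
    done | sort -u
}
```

Comparing that output against the file-integrity tool's monitored paths shows exactly which directories a given patch will touch.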

Thanks in advance.

The code that works on localhost does not work on DirectAdmin

Posted: 11 Sep 2021 04:58 PM PDT

I looked in SQL but I couldn't find an error; I can't connect. I will be glad if you can help.

<?php
    $host     = "localhost";
    $dbname   = "admin1_user";
    $charset  = "utf8";
    $root     = "admin1_root";
    $password = "";

    try {
        $db = new PDO("mysql:host=$host;dbname=$dbname;charset=$charset;", $root, $password);
    } catch (PDOException $error) {   // was misspelled "PDOExeption", so the catch never matched
        echo $error->getMessage();
    }

    // CSRF Token
    // note: session_start() must have been called before $_SESSION is usable
    if ($_SESSION) {
        if (!isset($_POST["_token"])) {
            $_SESSION["_token"] = md5(time().rand(0, 99999999));
        }
    }
?>

Update From Debian 10 to Debian 11 Gone Wrong

Posted: 11 Sep 2021 07:56 PM PDT

I just upgraded from Debian 10 to Debian 11 using these instructions. Everything seems to have worked smoothly, except that maldet is failing.

This is the error:

maldet[2117]: maldet(2117): {mon} kernel does not support inotify(), aborting
systemd[1]: maldet.service: Can't open PID file /usr/local/maldetect/tmp/inotifywait.pid (yet?) after start: Operation not permitted
systemd[1]: maldet.service: Failed with result 'protocol'.
systemd[1]: Failed to start Linux Malware Detect monitoring - maldet.

My /usr/lib/systemd/system/maldet.service file contains:

[Unit]
Description=Linux Malware Detect monitoring - maldet
After=network.target

[Service]
EnvironmentFile=/usr/local/maldetect/conf.maldet
ExecStart=/usr/local/maldetect/maldet --monitor USERS
ExecStop=/usr/local/maldetect/maldet --kill-monitor
Type=forking
PIDFile=/usr/local/maldetect/tmp/inotifywait.pid

[Install]
WantedBy=multi-user.target

Prior to my update, I verified all services were working properly, and during the update I chose "N" (no, declining to replace my custom config files), so nothing should have changed.

Also, I am using Linux 5.10.0-8-amd64 & maldet 1.6.4
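A diagnostic sketch (hedged: maldet's monitor mode drives inotify through the inotifywait binary, so the "kernel does not support inotify()" message can mean the userspace tool went missing during the upgrade rather than an actual kernel problem):

```shell
# On Debian, inotifywait comes from the inotify-tools package.
command -v inotifywait || sudo apt install inotify-tools

ls /proc/sys/fs/inotify/            # confirms kernel-side inotify support
sysctl fs.inotify.max_user_watches  # watch limit the monitor relies on
```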

Can someone help me figure this out? thanks

Why am I unable to access my website after installing an SSL certificate using Certbot? (running Ubuntu and Nginx)

Posted: 11 Sep 2021 03:52 PM PDT

I can not establish a connection to port 443 on my nginx server.

I needed port 443 to enable HTTPS connections, so I used certbot to install an SSL certificate, going with the default installation and the default instructions in this guide.

Even though I've enabled the 'Nginx Full' profile to open both the HTTP and HTTPS ports, I double-checked that port 443 is really open by running sudo lsof -i -P -n | grep LISTEN, and in the response I saw port 443 being used by nginx.

I tried tools like cURL to test my ports; port 80 works just fine, but I get no response from port 443.

I lack experience with server administration and I tried to check other resources but I don't know what else to do.

my sites-available configuration:

server {
    root /var/www/muhammed-aldulaimi.com/html;
    index index.html index.htm index.nginx-debian.html;

    server_name muhammed-aldulaimi.com www.muhammed-aldulaimi.com;

    location / {
        try_files $uri $uri/ =404;
    }

    listen [::]:443 ssl ipv6only=on; # managed by Certbot
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/muhammed-aldulaimi.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/muhammed-aldulaimi.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}

server {
    if ($host = www.muhammed-aldulaimi.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    if ($host = muhammed-aldulaimi.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80;
    listen [::]:80;

    server_name muhammed-aldulaimi.com www.muhammed-aldulaimi.com;
    return 404; # managed by Certbot
}

ufw status:

22/tcp                     ALLOW       Anywhere
Nginx Full                 ALLOW       Anywhere
22/tcp (v6)                ALLOW       Anywhere (v6)
Nginx Full (v6)            ALLOW       Anywhere (v6)
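Given that nginx is listening and ufw allows 'Nginx Full', the drop is often upstream of the host (e.g. a cloud or hosting-provider firewall that only forwards port 80). A sketch for narrowing it down:

```shell
# On the server itself: should complete a TLS handshake against local nginx.
openssl s_client -connect 127.0.0.1:443 -servername muhammed-aldulaimi.com </dev/null

# From an outside machine: if this times out while the local test succeeds,
# the packets are being dropped before they ever reach the host.
curl -vk https://muhammed-aldulaimi.com/
```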

iLO 4 and AlmaLinux/CentOS 8 do not work properly

Posted: 11 Sep 2021 03:05 PM PDT

I have an HP DL360p G8 and I have updated all firmware, such as the BIOS, iLO, etc., to the latest version, but the problem is that when I open the Virtual Console I can work with the first connection; when I close that session and reopen it later, I see this error: https://i.imgur.com/qurcNYa.png

I cannot work with the Virtual Console the second time or afterwards, and I have to reboot the server. Any idea what the problem is? Thank you.

nginx defaulting to /usr/share/nginx/html instead of going to the folder specified in root

Posted: 11 Sep 2021 03:10 PM PDT

I am using Let's Encrypt to set up SSL for a Django project that is hosted in production mode using nginx and Gunicorn.

The OS (operating system) being used is Amazon Linux 2.

Whenever I try to run the server, I keep getting the Amazon Linux 2 default page, and when I check the error logs I get the following error:

2021/09/11 11:59:14 [error] 18402#18402: *1961 open() "/usr/share/nginx/html/favicon.ico" failed (2: No such file or directory), client: 122.174.182.129, server: example.com, request: "GET /favicon.ico HTTP/2.0", host: "www.example.com", referrer: "https://www.example.com/"  

My nginx.conf looks like this:

Output of sudo nginx -T:

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
# configuration file /etc/nginx/nginx.conf:
# For more information on configuration, see:
#   * Official English Documentation: http://nginx.org/en/docs/
#   * Official Russian Documentation: http://nginx.org/ru/docs/

user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

# Load dynamic modules. See /usr/share/doc/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

http {
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 4096;

    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    include /etc/nginx/conf.d/*.conf;

    server {
        #if ($host = www.example.com) {
        #    return 301 https://$host$request_uri;
        #} # managed by Certbot
        #
        #if ($host = example.com) {
        #    return 301 https://$host$request_uri;
        #} # managed by Certbot

        listen 80;
        server_name example.com www.example.com;
        root /home/ec2-user/buisness;

        if ($host = www.example.com) {
            return 301 https://$host$request_uri;
        } # managed by Certbot

        if ($host = example.com) {
            return 301 https://$host$request_uri;
        } # managed by Certbot

        location = /favicon.ico { access_log off; log_not_found off; }

        location /static {
            root /home/ec2-user/buisness;
        }

        location / {
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_pass http://unix:/home/ec2-user/buisness/buisness.sock;
        }
    }

    #server {
    #    listen       80;
    #    listen       [::]:80;
    #    server_name  _;
    #    root         /home/ec2-user/buisness;
    #
    #    # Load configuration files for the default server block.
    #    include /etc/nginx/default.d/*.conf;
    #
    #    error_page 404 /404.html;
    #    location = /404.html {
    #    }
    #
    #    error_page 500 502 503 504 /50x.html;
    #    location = /50x.html {
    #    }
    #}

    # Settings for a TLS enabled server.
    server {
        listen       443 ssl http2;
        listen       [::]:443 ssl http2;
        server_name  example.com www.example.com;
        root         /home/ec2-user/buisness/;

        ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem; # managed by Certbot
        ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem; # managed by Certbot
        ssl_session_cache shared:SSL:1m;
        ssl_session_timeout  10m;
        ssl_ciphers HIGH:!aNULL:!MD5;
        ssl_prefer_server_ciphers on;

        # Load configuration files for the default server block.
        include /etc/nginx/default.d/*.conf;

        error_page 404 /404.html;
        location = /40x.html {
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
        }
    }
}

# configuration file /etc/nginx/mime.types:

types {
    text/html                                        html htm shtml;
    text/css                                         css;
    text/xml                                         xml;
    image/gif                                        gif;
    image/jpeg                                       jpeg jpg;
    application/javascript                           js;
    application/atom+xml                             atom;
    application/rss+xml                              rss;

    text/mathml                                      mml;
    text/plain                                       txt;
    text/vnd.sun.j2me.app-descriptor                 jad;
    text/vnd.wap.wml                                 wml;
    text/x-component                                 htc;

    image/png                                        png;
    image/svg+xml                                    svg svgz;
    image/tiff                                       tif tiff;
    image/vnd.wap.wbmp                               wbmp;
    image/webp                                       webp;
    image/x-icon                                     ico;
    image/x-jng                                      jng;
    image/x-ms-bmp                                   bmp;

    font/woff                                        woff;
    font/woff2                                       woff2;

    application/java-archive                         jar war ear;
    application/json                                 json;
    application/mac-binhex40                         hqx;
    application/msword                               doc;
    application/pdf                                  pdf;
    application/postscript                           ps eps ai;
    application/rtf                                  rtf;
    application/vnd.apple.mpegurl                    m3u8;
    application/vnd.google-earth.kml+xml             kml;
    application/vnd.google-earth.kmz                 kmz;
    application/vnd.ms-excel                         xls;
    application/vnd.ms-fontobject                    eot;
    application/vnd.ms-powerpoint                    ppt;
    application/vnd.oasis.opendocument.graphics      odg;
    application/vnd.oasis.opendocument.presentation  odp;
    application/vnd.oasis.opendocument.spreadsheet   ods;
    application/vnd.oasis.opendocument.text          odt;
    application/vnd.openxmlformats-officedocument.presentationml.presentation  pptx;
    application/vnd.openxmlformats-officedocument.spreadsheetml.sheet          xlsx;
    application/vnd.openxmlformats-officedocument.wordprocessingml.document    docx;
    application/vnd.wap.wmlc                         wmlc;
    application/x-7z-compressed                      7z;
    application/x-cocoa                              cco;
    application/x-java-archive-diff                  jardiff;
    application/x-java-jnlp-file                     jnlp;
    application/x-makeself                           run;
    application/x-perl                               pl pm;
    application/x-pilot                              prc pdb;
    application/x-rar-compressed                     rar;
    application/x-redhat-package-manager             rpm;
    application/x-sea                                sea;
    application/x-shockwave-flash                    swf;
    application/x-stuffit                            sit;
    application/x-tcl                                tcl tk;
    application/x-x509-ca-cert                       der pem crt;
    application/x-xpinstall                          xpi;
    application/xhtml+xml                            xhtml;
    application/xspf+xml                             xspf;
    application/zip                                  zip;

    application/octet-stream                         bin exe dll;
    application/octet-stream                         deb;
    application/octet-stream                         dmg;
    application/octet-stream                         iso img;
    application/octet-stream                         msi msp msm;

    audio/midi                                       mid midi kar;
    audio/mpeg                                       mp3;
    audio/ogg                                        ogg;
    audio/x-m4a                                      m4a;
    audio/x-realaudio                                ra;

    video/3gpp                                       3gpp 3gp;
    video/mp2t                                       ts;
    video/mp4                                        mp4;
    video/mpeg                                       mpeg mpg;
    video/quicktime                                  mov;
    video/webm                                       webm;
    video/x-flv                                      flv;
    video/x-m4v                                      m4v;
    video/x-mng                                      mng;
    video/x-ms-asf                                   asx asf;
    video/x-ms-wmv                                   wmv;
    video/x-msvideo                                  avi;
}

When I run sudo nginx -t, I get no errors.
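One way to see which server block is actually answering (a sketch): query nginx locally with an explicit Host header, and dump the server_name/listen pairs nginx loaded. If the Amazon Linux page still appears, the request is matching a default server pulled in by one of the include directives rather than the example.com block.

```shell
# Should be served from /home/ec2-user/buisness, not the default root.
curl -sk -H 'Host: www.example.com' https://127.0.0.1/ | head -n 5

# List every virtual host nginx actually loaded, including ones from conf.d.
sudo nginx -T | grep -E 'server_name|listen|root'
```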

PS: This is my first time hosting SSL with Let's Encrypt and nginx, so I'm sorry if the .conf file looks very cluttered.

Any help would be highly appreciated, thank you!

AIX Samba user access getpwuid failed

Posted: 11 Sep 2021 10:18 PM PDT

I have installed Samba 4.12.10 via yum on AIX 7.2. I have also installed the Kerberos packages to authenticate Samba with Kerberos.

My objective is to allow users access to folders/files on AIX from their Windows machines.

# yum list installed | grep samba
samba.ppc 4.12.10-2 @AIX_Toolbox_72
samba-client.ppc 4.12.10-2 @AIX_Toolbox_72
samba-common.ppc 4.12.10-2 @AIX_Toolbox_72
samba-devel.ppc 4.12.10-2 @AIX_Toolbox_72
samba-libs.ppc 4.12.10-2 @AIX_Toolbox_72
samba-winbind.ppc 4.12.10-2 @AIX_Toolbox_72
samba-winbind-clients.ppc 4.12.10-2 @AIX_Toolbox_72

# yum list installed | grep winbin
samba-winbind.ppc 4.12.10-2 @AIX_Toolbox_72
samba-winbind-clients.ppc 4.12.10-2 @AIX_Toolbox_72

# yum list installed | grep krb5
krb5-devel.ppc 1.18.3-1 @AIX_Toolbox
krb5-libs.ppc 1.18.3-1 @AIX_Toolbox
krb5-server.ppc 1.18.3-1 @AIX_Toolbox
krb5-server-ldap.ppc 1.18.3-1 @AIX_Toolbox
krb5-workstation.ppc 1.18.3-1 @AIX_Toolbox

However, when I try to access the AIX server in Windows File Explorer at \\pc96p9 (pc96p9 is my AIX machine name), it shows that access is denied, even though a correct domain username and password are provided.

Then I checked the Samba log at /etc/samba/log.10.161.139.74 (10.161.139.74 is the Windows machine accessing AIX), and I get the following error:

[2021/03/26 12:07:51.353238, 0] ../../source3/auth/token_util.c:567(add_local_groups)
  add_local_groups: SID S-1-5-21-2693943023-2014060074-1703039353-34220 -> getpwuid(100000) failed, is nsswitch configured?
[2021/03/26 12:07:51.353328, 3] ../../source3/auth/token_util.c:403(create_local_nt_token_from_info3)
  Failed to add local groups
[2021/03/26 12:07:51.353351, 1] ../../source3/auth/auth_generic.c:174(auth3_generate_session_info_pac)
  Failed to map kerberos pac to server info (NT_STATUS_NO_SUCH_USER)
[2021/03/26 12:07:51.353424, 3] ../../source3/smbd/smb2_server.c:3280(smbd_smb2_request_error_ex)
  smbd_smb2_request_error_ex: smbd_smb2_request_error_ex: idx[1] status[NT_STATUS_ACCESS_DENIED] || at ../../source3/smbd/smb2_sesssetup.c:146
[2021/03/26 12:07:51.354653, 3] ../../source3/smbd/server_exit.c:250(exit_server_common)
  Server exit (NT_STATUS_CONNECTION_RESET)

Here is my /etc/krb5.conf:

[logging]
 default = FILE:/var/log/krb5libs.log
 kdc = FILE:/var/log/krb5kdc.log
 admin_server = FILE:/var/log/kadmind.log

[libdefaults]
 default_realm = MY-OA.MY.ORG.HK
 dns_lookup_realm = false
 dns_lookup_kdc = false
 ticket_lifetime = 24h
 renew_lifetime = 7d
 forwardable = true

[realms]
 MY-OA.MY.ORG.HK = {
  kdc = MYIFS28.MY-OA.MY.ORG.HK
  admin_server = MYIFS28.MY-OA.ORG.HK
 }

[domain_realm]
 .my.org.hk = MY.ORG.HK
 my.org.hk = MY.ORG.HK

Here is my /etc/samba/smb.conf:

[global]
        realm = my-oa.my.org.hk
        netbios name = pc96p9
        workgroup = MY-OA
        realm = MY-OA.MY.ORG.HK
        password server = 10.67.1.92
        server services = rpc, nbt, wrepl, ldap, cldap, kdc, drepl, winbind, ntp_signd, kcc, dnsupdate, dns, s3fs
        security = ads
        idmap uid = 100000-200000
        idmap gid = 100000-200000
        template homedir = /home/%U
        template shell = /usr/bin/bash
        winbind use default domain = yes
        winbind offline logon = false
        winbind enum users = yes
        winbind enum groups = yes
        domain master = no
        local master = no
        preferred master = no
        socket options = TCP_NODELAY IPTOS_LOWDELAY SO_KEEPALIVE SO_SNDBUF=32768 SO_RCVBUF=32768
        os level = 0
        wins server = 10.67.1.92
        encrypt passwords = yes
        server signing = auto
        log file = /var/log/samba/log.%m
        log level = 3
        max log size = 50

[data]
        comment = Public Data Share
        path = /data1/winshare
        public = yes
        writable = yes
        inherit acls = yes
        inherit permissions = yes
        printable = no

And here is my /etc/nsswitch.conf:

passwd:     files winbind
shadow:     files winbind
group:      files winbind
hosts:      files dns wins

Actually, we have Samba 3.6 running fine in an AIX 7.1 production environment; the above three configuration files were copied directly from AIX 7.1 (Samba 3.6) to the new AIX 7.2 (Samba 4.12).
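The getpwuid(100000) failure suggests NSS cannot resolve the winbind-allocated UID. Worth noting (a sketch, not a confirmed fix): in Samba 4 the old idmap uid/idmap gid syntax carried over from the 3.6 config was replaced by idmap config ranges, and the lookup chain can be tested step by step (domain name and range taken from the config above; the username is hypothetical):

```shell
# smb.conf [global] -- Samba 4 idmap syntax replacing "idmap uid/gid":
#   idmap config * : backend = tdb
#   idmap config * : range = 100000-200000
#   idmap config MY-OA : backend = rid
#   idmap config MY-OA : range = 100000-200000

# Verify the chain: winbind itself first, then NSS layered on top of it.
wbinfo -u                        # winbind can enumerate domain users
wbinfo -i 'MY-OA\someuser'       # winbind can resolve one user
getent passwd 'MY-OA\someuser'   # NSS (nsswitch.conf + winbind) can too
```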

Can anyone please let me know if there is anything wrong in my samba configuration? Thanks in advance.

How do I delete the Conversation History folder on Office 365?

Posted: 11 Sep 2021 08:02 PM PDT

I'd like to know how to delete the Conversation History folder on an Office 365 account.

I do not use Skype for Business, and do not even use Outlook (I access the account from macOS Mail), and the folder is empty. Yet I can't delete it from the web (the option is just greyed out), and on Mac it gives me the following error:

The operation couldn't be completed. (MCMailErrorDomain error 1030.) The server returned the error: The operation couldn't be completed. (MFEWSErrorDomain error 46.)

Does anyone know of a solution or a hack (using Powershell, etc) to delete or hide that useless folder for good? I have admin access on the account if that matters. I've searched around a little bit but most "solutions" I found are completely irrelevant.

Regards.

CentOS server gets stuck at grub prompt

Posted: 11 Sep 2021 05:06 PM PDT

I have a server at OVH with cPanel, which has 2 x 2 TB partitions; its lsblk output looks like this: http://prntscr.com/nz4xd2. A few days back I ran some upgrades, and after rebooting, my server didn't boot properly: it halted during the boot process and displayed a prompt like this:

https://prnt.sc/nz50jf

I did some research and I was able to boot the server by running following commands at the grub prompt:

set prefix=(hd0,gpt2)/boot/grub2
set root=(hd0,gpt2)
linux (hd0,gpt2)/boot/vmlinuz-4...…… root=/dev/md2 ro
boot

Then I searched for a permanent solution, which took me to URLs like this one: https://www.linux.com/LEARN/HOW-RESCUE-NON-BOOTING-GRUB-2-LINUX, which suggest reinstalling grub using these commands:

# update-grub
# grub2-install /dev/sda

I am on CentOS, which doesn't have the update-grub command, but I was able to run the other command:

[root@server2 ~]# grub2-install /dev/sdb
Installing for i386-pc platform.
Installation finished. No error reported.

After this, when rebooting, I got the same grub prompt again. I am not sure what is wrong here. Can anyone please suggest something?
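For reference, grub2-install only writes the boot images; the CentOS counterpart of Debian's update-grub, which regenerates the grub.cfg that the rescue prompt suggests is missing or unreadable, is grub2-mkconfig. A sketch (BIOS boot assumed, matching the i386-pc output above; with / on the md2 array, both RAID member disks usually get the boot loader):

```shell
# Regenerate the grub configuration, then install grub to both RAID members.
grub2-mkconfig -o /boot/grub2/grub.cfg
grub2-install /dev/sda
grub2-install /dev/sdb
```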

Allow other AWS services to invoke Lambda using IAM

Posted: 11 Sep 2021 03:00 PM PDT

Is it possible to grant AWS services (e.g. API Gateway, Secrets Manager) permission to invoke a Lambda function using only IAM roles? Normally this is done in the function's policy (resource-based policy), but I wonder if this is the only way. The advantage of using IAM is that one policy can allow multiple Lambdas to be executed, without the overhead of managing one policy per function. (Note that I am not asking about Lambda IAM execution roles, which determine the function's permissions while it executes.)

Documentation on the Lambda Permissions Model suggests that IAM roles can be used in place of Lambda function policies:

Instead of using a Lambda function policy, you can create another IAM role that grants the event sources (for example, Amazon S3 or DynamoDB) permissions to invoke your Lambda function. However, you might find that resource policies are easier to set up and they make it easier for you to track which event sources have permissions to invoke your Lambda function.

In my digging, however, I have not been able to achieve the advertised effect. I've tried to grant two services permission to invoke Lambdas: API Gateway and Secrets Manager. In both cases, I found these services require access to be granted within the function policy, not an IAM role.

Service 1: Secrets Manager

I am rotating RDS credentials in Secrets Manager. Normally, Secrets Manager creates Lambdas to perform the rotation once you configure the secret's rotation schedule, but in my case I did not like the long custom names of the Lambda functions and created my own. I attempted to grant Secrets Manager permission to invoke any Lambda using IAM roles, so I created the following role:

Trust relationship:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowApiGatewayToAssumeRole",
      "Effect": "Allow",
      "Principal": {
        "Service": "apigateway.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

Permissions:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowInvokeAnyLambda",
      "Effect": "Allow",
      "Action": "lambda:InvokeFunction",
      "Resource": "*"
    }
  ]
}

At the time of this writing, the AWS console does not have a way to set a custom Lambda for rotating RDS credentials, so I used the CLI:

$ aws secretsmanager rotate-secret --secret-id 'rds/my-db/account' --rotation-lambda-arn 'arn:aws:lambda:us-east-1:xxxxxxxxxxxx:function:rotate-rds-secret' --rotation-rules AutomaticallyAfterDays=1

An error occurred (AccessDeniedException) when calling the RotateSecret operation: Secrets Manager cannot invoke the specified Lambda function. Ensure that the function policy grants access to the principal secretsmanager.amazonaws.com

So it looks like Secrets Manager does not use this IAM role to invoke the Lambda. And there does not seem to be a way to configure Secrets Manager to use a specific IAM role.

Service 2: API Gateway

I am using API Gateway to call my Lambda function. API Gateway has two different integration types which support invoking a Lambda: (1) Lambda function and (2) AWS Service (https://docs.aws.amazon.com/apigateway/latest/developerguide/integrating-api-with-aws-services-lambda.html).

When using Lambda function integration, it's the same story as with Secrets Manager—you need to use function policies to grant API Gateway invoke access. But with AWS Service integration, you actually specify which IAM role API Gateway should use to invoke the Lambda. This makes sense to me, because how would a service like API Gateway know which IAM role to use to invoke a Lambda? Yet there is no way to choose an IAM role when using the Lambda function integration. Or is there...
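For comparison, the resource-based route that the Secrets Manager error message asks for is one CLI call per function (function name reused from the example above; the statement id is arbitrary):

```shell
aws lambda add-permission \
    --function-name rotate-rds-secret \
    --statement-id AllowSecretsManagerInvoke \
    --action lambda:InvokeFunction \
    --principal secretsmanager.amazonaws.com
```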


Cross-posted on: https://forums.aws.amazon.com/message.jspa?messageID=844660#844660

Ansible: access variables and dump to CSV file

Posted: 11 Sep 2021 04:03 PM PDT

vars:
  servers:
    - name: centos
      port: 22

tasks:
  - name: Check if remote port is open
    wait_for: host={{ item.name }} port={{ item.port }} timeout=1
    ignore_errors: True
    register: out
    with_items: "{{ servers }}"

  - debug: var=out

  - name: Save remote port
    # the registered wait_for result has no top-level "host" key; the probed
    # host is at item.invocation.module_args.host (see the debug output),
    # and ">>" appends instead of overwriting the file on each iteration
    shell: echo "{{ item.invocation.module_args.host }}" >> /tmp/x_output.csv
    args:
      executable: /bin/bash
    with_items: "{{ out.results }}"

OUTPUT

PLAY [all] **************************************************************************************************************************

TASK [Gathering Facts] **************************************************************************************************************
ok: [centos]

TASK [telnet : Check if remote port] ************************************************************************************************
ok: [centos] => (item={u'name': u'centos', u'port': u'22'})

TASK [telnet : debug] ***************************************************************************************************************
ok: [centos] => {
    "out": {
        "changed": false,
        "msg": "All items completed",
        "results": [
            {
                "_ansible_ignore_errors": true,
                "_ansible_item_result": true,
                "_ansible_no_log": false,
                "_ansible_parsed": true,
                "changed": false,
                "elapsed": 0,
                "failed": false,
                "invocation": {
                    "module_args": {
                        "active_connection_states": [
                            "ESTABLISHED",
                            "FIN_WAIT1",
                            "FIN_WAIT2",
                            "SYN_RECV",
                            "SYN_SENT",
                            "TIME_WAIT"
                        ],
                        "connect_timeout": 5,
                        "delay": 0,
                        "exclude_hosts": null,
                        "host": "centos",
                        "msg": null,
                        "path": null,
                        "port": 22,
                        "search_regex": null,
                        "sleep": 1,
                        "state": "started",
                        "timeout": 1
                    }
                },
                "item": {
                    "name": "centos",
                    "port": "22"
                },
                "path": null,
                "port": 22,
                "search_regex": null,
                "state": "started"
            }
        ]
    }
}

TASK [telnet : Save remote port] ****************************************************************************************************
fatal: [centos]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'host'\n\nThe error appears to have been in '/home/xxxxxx/ansible/tso-playbook/roles/telnet/tasks/main.yml': line 17, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: Save remote port\n  ^ here\n\nexception type: <class 'ansible.errors.AnsibleUndefinedVariable'>\nexception: 'dict object' has no attribute 'host'"}
        to retry, use: --limit @/home/xxxxxxx/ansible/tso-playbook/telnet.retry

PLAY RECAP **************************************************************************************************************************
centos                     : ok=3    changed=0    unreachable=0    failed=1

Note: this is my first time posting here, and I don't know how to format things line by line properly. I just want to access the host value, which is 'centos', and save it to the CSV file. Of course I'll need to do more after that, but this is the first thing I need to get working. Please help! Thanks.

---
- name: Save remote port
  shell: echo {{ item.changed }} > /tmp/x_output.csv
  args:
    executable: /bin/bash
  with_items: "{{ out.results }}"

item.changed, which is "False", is the only field I can reference; all the others I can't.

Why?
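A likely fix, sketched under the assumption that the registered wait_for results look exactly like the debug output above: each result has no top-level host key. The original loop entry is nested under item.item, and the module arguments under item.invocation.module_args, so the task would reference one of those instead:

```yaml
# Hypothetical corrected task - the hostname lives in item.item.name
# (the original loop entry) or item.invocation.module_args.host,
# not in item.host:
- name: Save remote port
  shell: echo "{{ item.item.name }},{{ item.item.port }}" >> /tmp/x_output.csv
  args:
    executable: /bin/bash
  with_items: "{{ out.results }}"
```

Note the >> append so that one line per server accumulates in the CSV file instead of each iteration overwriting the last.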

How to force Docker to rerun `apt-get update`?

Posted: 11 Sep 2021 08:34 PM PDT

Some security updates have just come out and I want to rebuild my Docker images to take advantage of the updates.

However when I run docker build . it completes immediately without updating anything because nothing has changed in the Dockerfile, and everything is cached. It doesn't even try to run the apt-get update line in my Dockerfile.

How can I force Docker to run the apt-get update command again, even though nothing has changed?

There is a --no-cache option that says it won't use the cache during the build, but I want it to use the cache for the commands before apt-get update and I want the results saved into the cache for the next run (replacing the currently cached images), so I definitely want to be using the cache.

I also can't use docker rmi to remove the image generated after apt-get has run, because Docker refuses to delete it while it has dependent child images.
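One common workaround (a sketch, not an official Docker feature for this): declare a build argument just before the apt-get line. Changing the argument's value invalidates the cache from that point on, while every layer above it stays cached. The Dockerfile below is a hypothetical minimal example:

```shell
# Write a hypothetical Dockerfile: the ARG placed just before apt-get update
# acts as a cache-bust point; layers above it remain cached.
cat > Dockerfile.example <<'EOF'
FROM ubuntu:20.04
# ...earlier steps here stay cached across rebuilds...
ARG CACHEBUST=0
RUN apt-get update && apt-get -y upgrade
EOF
# Rebuild with a fresh value to force apt-get update to rerun:
#   docker build --build-arg CACHEBUST=$(date +%s) .
grep -c 'CACHEBUST' Dockerfile.example
```

Because the new layers replace the old ones in the cache, subsequent builds without a changed CACHEBUST value reuse the updated layers, which is exactly the behavior --no-cache would have thrown away.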

Unable to Set local group policy using powershell

Posted: 11 Sep 2021 03:00 PM PDT

I am trying to set a local group policy using PowerShell. My goal is to enable

Disable changing home page settings

Below is the PowerShell script:

# Set the values as needed
$dchps_control_panel = "HKCU:\Software\Policies\Microsoft\Internet Explorer\Control Panel"
$dchps_main = "HKCU:\Software\Policies\Microsoft\Internet Explorer\Main"
$homepage = "HomePage"
$enabled = 1
$startpage = "Start Page"
$startpage_value = "www.google.com"

# Test if the Registry Key exists already
if(-not (Test-Path $dchps_control_panel) -or -not (Test-Path $dchps_main) )
{
    # Create Control Panel Key to enable and Main Key
    New-Item -Path $dchps_control_panel -Force
    New-Item -Path $dchps_main -Force
}

# Enable Disable changing home page settings
Set-ItemProperty -Path $dchps_control_panel -Name $homepage -Value $enabled -Type DWord

# Set Start Page
Set-ItemProperty -Path $dchps_main -Name $startpage -Value $startpage_value -Type String

Both registry keys get created. However, when I check gpedit.msc, the setting still shows as not configured.

Thanks

Creating bootable USB thumb drive from PXE boot files

Posted: 11 Sep 2021 10:03 PM PDT

I've looked around and Googled but could not find a similar question. It seems most people want to take a bootable USB thumb drive and convert it into a PXE boot image. I actually need to go in the opposite direction: create a bootable USB thumb drive from PXE boot files.

I have a PXE server used for disk imaging. Some devices on my network are not able to PXE boot because 1) PXE is already being used by something else on their subnet or 2) their network adapter doesn't support PXE. My only option would be to grab whatever files are being used to PXE boot the device and try to make a bootable USB thumb drive.

I have access to the PXE server which is using PXELinux. Here's what's in the "default" file being used by PXELinux.

default imaging
prompt 0
noescape 1

label imaging
kernel kernel/bzImage
append initrd=kernel/init.gz root=/dev/ram0 rw ramdisk_size=127000 ip=dhcp dns=10.10.10.5 storage=10.10.10.211:/imaging/images/ driversstorage=10.10.10.211:/imaging/drivers/ imaging_server=10.10.10.211:20444 symmetric_key=KsqRwghBK+l/LGQ83kOp3Gl8Xos9mrTItQ69MJabgAv5DqcKakVCwNpE4QJ+A9zzDoSAhdREIVK4lkUZP67XXg loglevel=4a

I'm mostly a Windows/Mac guy but can get around in Linux and am comfortable running commands in a terminal. I know next to nothing about the Linux boot process or how to make a bootable USB thumb drive. I gather from the above PXELinux config file that the bzImage and init.gz files are what's needed to PXE boot a client device. Is there a way to use these two files and the info from the PXELinux configuration to create a bootable USB thumb drive?
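In principle, yes: SYSLINUX (the same family as PXELinux) can boot those two files from a FAT-formatted stick, reusing the PXE configuration almost verbatim. A sketch of the staging steps, with the device name /dev/sdX1 and the tftp root path /tftpboot as assumptions; the destructive steps are left commented:

```shell
# Stage the files a SYSLINUX-bootable USB stick would need.
STAGE=$(mktemp -d)
mkdir -p "$STAGE/kernel"
# Copy the same files the PXE server serves (path is an assumption):
#   cp /tftpboot/kernel/bzImage /tftpboot/kernel/init.gz "$STAGE/kernel/"
# syslinux.cfg mirrors the PXELinux 'default' file; carry over the rest of
# the append arguments (dns=, storage=, symmetric_key=, ...) verbatim:
cat > "$STAGE/syslinux.cfg" <<'EOF'
default imaging
prompt 0
noescape 1

label imaging
kernel kernel/bzImage
append initrd=kernel/init.gz root=/dev/ram0 rw ramdisk_size=127000 ip=dhcp
EOF
# Destructive steps on the actual stick - double-check the device name:
#   syslinux --install /dev/sdX1
#   mount /dev/sdX1 /mnt/usb && cp -r "$STAGE"/. /mnt/usb/
grep -c '^label imaging' "$STAGE/syslinux.cfg"
```

Since the kernel arguments point at NFS/network resources (storage=, imaging_server=), the client still needs network access once booted; only the boot delivery itself moves off PXE.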

Thanks

"Access is denied" message after logging in to windows 2012 r2 via remote desktop

Posted: 11 Sep 2021 08:24 PM PDT

I installed the Remote Desktop Session Host server role on a Windows 2012 R2 machine. After that, I rebooted the VM, and now when I try to log in via Remote Desktop, partway through the login process I get an "Access is denied" yellow message with an OK button below it. I was able to log in before that.

Any idea how I can fix this? I am a local admin on the server, but I cannot launch Server Manager remotely. I was able to run the Computer Management console.

Thanks

add package to WinPE using DISM fails

Posted: 11 Sep 2021 07:02 PM PDT

I'm having trouble adding a package to a custom WinPE image.

When I try to add a package (using dism /image:c:\temp\mount /Add-Package /PackagePath:"C:\Program Files\Windows AIK\Tools\PETools\x86\WinPE_FPs\winpe_scripting.cab" in a command prompt with administrative privileges), I get this message:

An error occurred trying to open - C:\Program Files\Windows AIK\Tools\PETools\x86\WinPE_FPs\winpe_scripting.cab Error: 0x80070003 An error occurred trying to open - C:\Program Files\Windows AIK\Tools\PETools\x86\WinPE_FPs\winpe_scripting.cab Error: 0x80070003

Error: 3

An error occurred trying to open - C:\Program Files\Windows AIK\Tools\PETools\x86\WinPE_FPs\winpe_scripting.cab Error: 0x80070003

When I look in the dism.log I see this:

Incorrect parameter C:\Program Files\Windows AIK\Tools\PETools\x86\WinPE_FPs\winpe_scripting.cab - path not found -

However, I checked the path and there is no error in it. Also in the dism.log there is this error:

DISM DISM Package Manager: PID=3564 TID=4204 Failed to get the underlying CBS package. - CDISMPackageManager::OpenPackageByPath(hr:0x80070003)

I have no clue of what that is.

Can somebody help me with adding packages to a custom WinPE .wim image?

Thanks in advance.

Jack

Firewall Ports + HaProxy

Posted: 11 Sep 2021 04:03 PM PDT

We are using HAProxy as an SSH load balancer: HAProxy listens on IP1:2222 and redirects to IP1:2223-2233 and IP2:2223-2233.

In this case, do I need to open Firewall Ports from

  • Source IP:2222,2223-2233 to IP1:2223-2233 and IP2:2223-2233 or
  • Source IP:2222 to IP1:2222 and IP1:2222 to IP1:2223-2233 and IP2:2223-2233?

When I trace the route, I don't see the request being forwarded from the LB to the actual targets in the sftp/ssh -vvv logs.

Restarting process gives 'start: Rejected send message' error

Posted: 11 Sep 2021 05:06 PM PDT

On a VM provisioned through Azure Cloud Services, service walinuxagent status reports the walinuxagent process as:

walinuxagent stop/waiting

Next I do service walinuxagent start. I end up getting:

start: Rejected send message, 1 matched rules; type="method_call", sender=":1.1551" (uid=1000 pid=59402 comm="start walinuxagent ") interface="com.ubuntu.Upstart0_6.Job" member="Start" error name="(unset)" requested_reply="0" destination="com.ubuntu.Upstart" (uid=0 pid=1 comm="/sbin/init ")

Can someone help out with diagnosing and fixing this issue? Essentially, I added swapfile settings to walinuxagent, and I'm now trying to restart it.

How to use HAProxy in load balancing and as a reverse proxy with docker?

Posted: 11 Sep 2021 10:03 PM PDT

I am using the HAProxy Docker image to share load between multiple similar containers. It works fine if I use a single address like web.abc.com, which is mapped to my localhost, to query only one set of containers. But now I want to use api.abc.com too with this HAProxy config file.

So the scenario is: if I hit web.abc.com, it shares the load across the web application containers using round robin and shows me their content, and if I hit api.abc.com, it gives me access to the containers that serve the API.

I have tried multiple config changes in my haproxy.cfg, but it's not working.

This is my docker-compose file, and this is the haproxy.cfg I am using, which is obviously not working.

Is this scenario even possible with HAProxy? Please help.
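It is possible; HAProxy can route on the Host header with ACLs and still round-robin inside each backend. A sketch of the relevant haproxy.cfg pieces, where all backend and server names are assumptions (the question's own config files are not shown):

```
# Hypothetical haproxy.cfg sketch - choose a backend by Host header,
# then balance round-robin among the containers in that backend:
frontend http-in
    bind *:80
    acl host_web hdr(host) -i web.abc.com
    acl host_api hdr(host) -i api.abc.com
    use_backend web_servers if host_web
    use_backend api_servers if host_api

backend web_servers
    balance roundrobin
    server web1 web1:80 check
    server web2 web2:80 check

backend api_servers
    balance roundrobin
    server api1 api1:80 check
    server api2 api2:80 check
```

In a docker-compose setup the server addresses (web1, api1, ...) would be the compose service names, which Docker's internal DNS resolves on the shared network.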

How to increase the disk write buffer for rsync on Mac OS X?

Posted: 11 Sep 2021 09:09 PM PDT

I have to copy some TB of data from one external drive to another (mostly ~400 MB files). The source is NTFS-formatted USB2 and the best read speed I can get from it is 7MB/s. The target is HFS+ USB3 and the write speed is considerably higher than that.

The NTFS filesystem isn't OK, and only NTFS-3G (slow, userspace) can read all the files in it; not even Windows can, or I'd use that.

I'm using rsync -va /Volumes/source /Volumes/target to perform the copy.

Is there a way, in rsync or OS X, to cache the writes, so that the target doesn't need to be permanently writing at a slow speed for days? Something like filling 1GB or 2GB or even 4GB of RAM before writing? Thus the target drive would spend a few days doing

write - rest - rest - rest - rest - rest
write - rest - rest - rest - rest - rest
write - rest - rest - rest - rest - rest
...

instead of spending all those days continuously writing at a slow speed.

Another possibility, given that most files are large, would be to have rsync only write any given file after it's finished reading it, but I found no way to do that either.

Or is it better to just spend the whole time continuously writing at a slow speed?
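rsync itself has no write-buffer option, but the same effect is possible with a tar pipe through a RAM buffer. One sketch uses mbuffer (an assumption: it is a third-party tool, installable via Homebrew with brew install mbuffer, not part of OS X). The real command is shown commented since it needs the actual drives; a runnable demonstration of the same pipe shape follows, with dd standing in for the buffer stage:

```shell
# Real form (commented - requires the external drives and mbuffer):
#   tar -C /Volumes/source -cf - . | mbuffer -m 2G | tar -C /Volumes/target -xf -
# mbuffer fills up to 2 GB of RAM from the slow reader, so the writer
# works in bursts instead of trickling for days.

# Runnable demonstration of the pipe structure on throwaway files:
SRC=$(mktemp -d); DST=$(mktemp -d)
echo "hello" > "$SRC/file.txt"
tar -C "$SRC" -cf - . | dd bs=1048576 2>/dev/null | tar -C "$DST" -xf -
cat "$DST/file.txt"   # → hello
```

A trade-off to note: unlike rsync, a plain tar pipe is not resumable, so for a multi-day copy it may be worth running it per-directory.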

Installing MariaDB along with MySQL: Could not find mysqld

Posted: 11 Sep 2021 09:09 PM PDT

I am following the instruction on the MariaDB site here all is fine until I get to the part shown below.

[root@squir scripts]# mysql_install_db --defaults-file=/mariadb/data/my.cnf

FATAL ERROR: Could not find mysqld

The following directories were searched:

    /mariadb/mariadb/libexec
    /mariadb/mariadb/sbin
    /mariadb/mariadb/bin

If you compiled from source, you need to run 'make install' to
copy the software into the correct location ready for operation.

If you are using a binary release, you must either be at the top
level of the extracted archive, or pass the --basedir option
pointing to that location.

I wonder if there is a problem with the directory structure that is confusing the installer. The documentation is also old, so I am not sure if there is a step missing.

What I have done so far

  1. Downloaded the MariaDB.tar.gz
  2. Created a directory called /mariadb/
  3. Extracted the MariaDB.tar.gz and created a symbolic link called mariadb
  4. Created a user group and username for MariaDB directory
  5. Created /mariadb/data for storing MariaDB data file (Database)
  6. Copied the pre-written configuration file with this command:

    cp mariadb/support-files/my-medium.cnf mariadb/data/my.cnf

  7. Edited the my.cnf file as instructed here

    [client]
    port = 3307
    socket = /mariadb/data/mariadb.sock

    [mysqld]
    datadir = /mariadb/data
    basedir = /mariadb/mariadb
    port = 3307
    socket = /mariadb/data/mariadb.sock
    user = mariadb

  8. Copied the init.d script from support-files to the right location.

  9. Edited /etc/init.d/mariadb, replacing mysql with mariadb
  10. Ran mysql_install_db, explicitly giving it the my.cnf file as an argument:

    scripts/mysql_install_db --defaults-file=/opt/mariadb-data/my.cnf

However, at step 10 I can't follow that command exactly, so I did what I stated above and got the error shown.
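The error message itself suggests the likely cause: mysql_install_db searches for mysqld relative to the current directory unless told otherwise. A sketch of the fix, assuming the directory layout described in the steps above (/mariadb/mariadb is the symlink to the extracted archive); the command is printed rather than run, since it needs the actual MariaDB tree:

```shell
# Either run from the top of the extracted archive, or pass --basedir
# pointing at it - matching what the FATAL ERROR text asks for:
CMD="scripts/mysql_install_db --defaults-file=/mariadb/data/my.cnf --basedir=/mariadb/mariadb"
#   cd /mariadb/mariadb && $CMD
echo "$CMD"
```

With --basedir set, the search directories in the error output (libexec, sbin, bin) are resolved under /mariadb/mariadb, where the binary-release mysqld actually lives.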

Application Error 0xc0000142 on Windows Server 2008 R2 after crash

Posted: 11 Sep 2021 06:03 PM PDT

Our server crashed a few days ago; the cause was an HDD failure. The server seems to be almost fully functional (except that the Exchange DB is now corrupt, but that's another story).

The main problem now is that Windows system applications like cmd.exe, sfc.exe etc. won't run because of error "The application failed to initialize properly (0xC0000142)"(STATUS_DLL_INIT_FAILED).

We have not found a way to fix this error. We've been reading every single post about this error code on Google, but we have found no solution yet. Maybe an expert around here could give some tips and hints?

The most unfortunate thing is that the backup system failed silently 1 month ago. The last bare metal backup is 1 month old, and so is the Exchange db and everything else.

Now I have this semi working server in front of me and I don't know how to fix this mess.

My plan now is to install a fresh Windows Server on a new pair of HDDs, manually reconstruct the Active Directory users, and then try to somehow restore the Exchange DB and make it match, but after some research this seems to be almost impossible. A cross-forest migration is not possible because the crashed server refuses to run anything that requires the command line, so I can't run all the commands and scripts a cross-forest migration requires.

I believe I'm screwed, right? We have 15 users and the Exchange 2010 DB is 16 GB, by the way.

Any help would be really appreciated.

Cannot Connect to VSFTP outside of network

Posted: 11 Sep 2021 06:03 PM PDT

I am having a hair-pulling issue with my VSFTPD, and I am not sure where to turn. I have gone through everything to make sure it is working properly, and when connecting to FTP using ftp localhost I am able to log in with the username and password I have specified. When I try to connect from outside, I get the prompt Connected to domainname.com. but no prompt for user and password; in addition, when using an FTP client it hangs and never connects.

The server is running Ubuntu 12.04 LTS and VSFTPD 2.3.5

Here is the output of running iptables -L : http://pastie.org/4892233

Here is the output when running ps -FC vsftpd :

root     14343     1  0  1168   984   3 16:55 ?        00:00:00 /usr/sbin/vsftpd  

Here is output of running netstat -tlpn | grep vsftpd :

tcp6       0      0 :::21                   :::*                    LISTEN      14343/vsftpd      

I have uninstalled and reinstalled many times and tried several different configurations, and I am at a complete loss as to why this isn't working. We very often use the same configuration on the same type of servers with no issues.
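The symptom (the control connection on port 21 succeeds, then the client hangs) is typical of passive-mode data ports being blocked at the firewall. A sketch of vsftpd.conf additions to pin passive mode to a fixed range; the range 40000-40100 is an arbitrary choice, and the same range must also be allowed in iptables:

```
# Hypothetical vsftpd.conf additions - passive-mode data connections
# need a fixed, firewall-permitted port range:
pasv_enable=YES
pasv_min_port=40000
pasv_max_port=40100
pasv_address=<your public IP>
```

Testing with ftp localhost never exercises this path, which would explain why local logins work while outside clients hang after the banner.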

Thank you in advance for your help.

puppet parameterised classes

Posted: 11 Sep 2021 06:30 PM PDT

I am having trouble getting parameterised classes working in Puppet 2.6.4 (client and master).

######## from /etc/puppet/manifests/nodes.pp
# defining one node to use certain version
#######################################################
node 'dev-internal-000008.domain.com' {
        include software($version="dev-2011.02.11")
}
# from /etc/puppet/modules/software/manifests/init.pp

I am setting the version here as the "default"

#
class software($version="dev-2011.02.04b") {
  File {
    links => follow
  }

  file { "/opt/software_AIR":
    ensure => directory
  }

  file { "/opt/software_AIR/share":
    source => "puppet://puppet/software/air/$version",
    recurse => "true",
  }
}
#

errors from puppet master

#
err: Could not parse for environment production: Syntax error at '='; expected ')' at /etc/puppet/manifests/nodes.pp:10 on node dev-internal-domain.com  
#

found a fix for this

try

node 'dev-internal-000008.domain.com' {
  class { customsoftware: version => "dev-2011.02.04b" }
}

VPN is working, except for DNS lookups. Firewall (Cisco ASA 5505) issue?

Posted: 11 Sep 2021 07:02 PM PDT

I've got the following set up:

LAN ->  DHCP / DNS / VPN server (OSX 10.6) -> Cisco ASA 5505 -> WAN  

Connecting to the LAN via VPN works fine. I get all the details properly, and I can ping any host on the internal network by IP. However, I can't do any host lookups whatsoever. I've looked through the logs and found this nugget in the firewall log:

3   Sep 08 2010 10:46:40    305006    10.0.0.197    65371
portmap translation creation failed for udp src inside:myhostname.local/53 dst inside:10.0.0.197/65371

Port 53 is DNS, no? Because of that log entry, I'm thinking the issue is with the firewall, not the server. Any ideas? Please keep in mind that I have very little knowledge of and experience with this kind of firewall, and the little experience I do have is with the ASDM GUI console, not the CLI.
