Monday, April 12, 2021

Recent Questions - Server Fault



Trying to find someone causing trouble on my network. Need to convert public to private IP

Posted: 12 Apr 2021 09:15 PM PDT

Somebody has started an Instagram page in my hotel that is being used to bully people and damage their reputations. I am trying to find them. Since they are connected to my internet, is there any way I can grab their public IP through a link and then convert it to a private IP address that I can match up with the hostname of their device?
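A hedged observation plus a sketch: behind NAT, every guest shares the hotel's public IP, so a link-based IP grabber will mostly just confirm the poster is on your network. The mapping you actually need - private IP to MAC/hostname - lives in your router's or DHCP server's lease table. The snippet below parses dnsmasq-style lease lines; the format is an assumption and will differ by router:

```python
# Parse dnsmasq-style DHCP lease lines into a private-IP -> device mapping.
# Assumed line format: <expiry> <mac> <ip> <hostname> <client-id>
def parse_leases(text):
    """Return {ip: {"mac": ..., "hostname": ...}} from lease-file text."""
    leases = {}
    for line in text.strip().splitlines():
        parts = line.split()
        if len(parts) >= 4:
            _expiry, mac, ip, hostname = parts[:4]
            leases[ip] = {"mac": mac, "hostname": hostname}
    return leases

sample = "1618300000 aa:bb:cc:dd:ee:ff 192.168.1.23 guest-laptop *"
print(parse_leases(sample)["192.168.1.23"]["hostname"])  # guest-laptop
```

With such a mapping, a timestamped private IP from your own router/firewall logs can be matched to a device, which is usually more reliable than anything derived from the shared public IP.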

Unable to update kafka cluster version in AWS MSK

Posted: 12 Apr 2021 09:12 PM PDT

We have written Python code to upgrade the Kafka version in AWS MSK, and it is giving this error:

    ..................................................
    Traceback (most recent call last):
      File "./update_kafka_version.py", line 71, in <module>
        update_kafka_version(name, targetKafkaVersion)
      File "./update_kafka_version.py", line 40, in update_kafka_version
        update_kafka_version_response = client.update_cluster_kafka_version(
      File "/usr/local/lib/python3.8/site-packages/botocore/client.py", line 357, in _api_call
        return self._make_api_call(operation_name, kwargs)
      File "/usr/local/lib/python3.8/site-packages/botocore/client.py", line 676, in _make_api_call
        raise error_class(parsed_response, operation_name)
    botocore.errorfactory.BadRequestException: An error occurred (BadRequestException) when calling the UpdateClusterKafkaVersion operation: The specified parameter value is identical to the current value for the cluster. Specify a different value, then try again.

Per the boto3 documentation for Kafka:

https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/kafka.html#Kafka.Client.update_cluster_kafka_version

    response = client.update_cluster_kafka_version(
        ClusterArn='string',
        ConfigurationInfo={
            'Arn': 'string',
            'Revision': 123
        },
        CurrentVersion='string',
        TargetKafkaVersion='string'
    )

We have stored the Kafka ZooKeeper endpoints, bootstrap nodes, cluster ARN, and cluster version in Parameter Store, and we fetch the cluster ARN from there. We fetch the current version using describe-cluster, but it still gives the error that the specified current version matches the current cluster value.
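Not a definitive fix, but a sketch of a guard that matches the error text: the API rejects the call because the supplied value is identical to what the cluster already has, so it is worth comparing the running Kafka version with the target before calling (note also that the API's CurrentVersion parameter refers to the cluster's version identifier returned by describe_cluster, not the Kafka software version - worth double-checking against the boto3 docs). All names below are hypothetical:

```python
# Hedged sketch: only call UpdateClusterKafkaVersion when the target
# actually differs from the Kafka version the cluster is running.
def needs_kafka_update(running_kafka_version: str, target_kafka_version: str) -> bool:
    """True only when the upgrade would change the running Kafka version."""
    return running_kafka_version.strip() != target_kafka_version.strip()

# The boto3 call would then be wrapped roughly like this (untested sketch):
# if needs_kafka_update(running, target):
#     client.update_cluster_kafka_version(
#         ClusterArn=cluster_arn,
#         CurrentVersion=current_cluster_version,  # identifier from describe_cluster
#         TargetKafkaVersion=target,
#     )
print(needs_kafka_update("2.2.1", "2.2.1"))  # False
```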

Python module versions for boto3 on my laptop:

boto3 1.17.27, botocore 1.20.44

Any advice on fixing this issue would be highly appreciated.

How to create a MongoDB user with specific actions

Posted: 12 Apr 2021 08:29 PM PDT

My db has these collections: users, transactions, balances, ... Each collection is managed by one VPS. How can I create a user for each collection with specific actions? For example, VPS A can only read the users collection but can't read the password field it contains. VPS B can read/create in the balances collection but can't edit or delete it. Thanks!
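Not a definitive answer, but a sketch of the moving parts: MongoDB roles grant actions per database/collection via createRole, while hiding an individual field such as password is not something a role can express - that usually needs a read-only view that projects the field away. The role documents below are illustrative only (role and db names are made up), and would be passed to db.command(...) with pymongo and then referenced from createUser for each VPS account:

```python
# Hypothetical role for VPS A: read-only on the users collection.
# (Field-level hiding of `password` would still need a view.)
read_users_role = {
    "createRole": "vpsA_usersReader",
    "privileges": [
        {
            "resource": {"db": "mydb", "collection": "users"},
            "actions": ["find"],               # read-only
        }
    ],
    "roles": [],
}

# Hypothetical role for VPS B: read/create on balances, no update/remove.
balances_writer_role = {
    "createRole": "vpsB_balancesWriter",
    "privileges": [
        {
            "resource": {"db": "mydb", "collection": "balances"},
            "actions": ["find", "insert"],     # no "update" or "remove"
        }
    ],
    "roles": [],
}
```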

Cannot see the newly installed kernel on the boot screen

Posted: 12 Apr 2021 10:16 PM PDT

We have upgraded RHEL from 7.6 to 7.9 and the new kernel has been installed (kernel-3.10.0-1160.15.2.el7.x86_64), but we are unable to see the new kernel's menu entry on the boot menu screen.

As far as I have checked, the new kernel is installed and shows in the grub.cfg file, and I have reinstalled the kernel multiple times, but the new kernel still does not appear on the boot screen.

df /boot

/dev/sda1                          487652   209315    248641  46% /boot   

Installed kernel versions (1160 is the new kernel we want to boot from):

    abrt-addon-kerneloops-2.1.11-52.el7.x86_64                  Wed Jul 10 19:11:44 2019
    kernel-3.10.0-957.38.3.el7.x86_64                           Sat Nov 23 15:16:16 2019
    kernel-3.10.0-957.58.2.el7.x86_64                           Fri Aug 28 23:01:22 2020
    kernel-3.10.0-1160.15.2.el7.x86_64                          Sat Mar  6 17:41:02 2021   <-- new kernel
    kernel-headers-3.10.0-1160.15.2.el7.x86_64                  Sat Mar  6 17:40:25 2021
    kernel-tools-3.10.0-1160.15.2.el7.x86_64                    Sat Mar  6 17:42:54 2021
    kernel-tools-libs-3.10.0-1160.15.2.el7.x86_64               Sat Mar  6 17:38:40 2021
    libreport-plugin-kerneloops-2.1.11-42.el7.x86_64            Wed Jul 10 19:15:51 2019

I have also changed the default kernel, but that is not working either.

Any workaround for the issue?

Issue with SPF records for website hosted on GCP

Posted: 12 Apr 2021 10:13 PM PDT

I am new to GCP. I bought a domain from GoDaddy and hosted my website, built on WordPress, on GCP. I have contact forms on the site, but when customers visit and leave a message, I do not get their emails at the designated email address.

My understanding is that this is due to SPF records, as my email provider (Zoho) also says the SPF records are not correct. Zoho's requirement:

v=spf1 include:zoho.com.au ~all  

Record in my GCP DNS Zone:

Mydomain(domainname)    TXT     600      "v=spf1" "include:zoho.com.au" "~all"  

When I contacted Zoho support, they said this was strange and that I should contact my DNS provider.
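One detail worth checking in the record above (hedged, but this matches how SPF verifiers assemble TXT data per RFC 7208): multiple quoted strings in a single TXT record are concatenated with no separator, so publishing `"v=spf1" "include:zoho.com.au" "~all"` yields a string without spaces, which is not a valid SPF record. A small illustration:

```python
# How an SPF checker assembles a TXT record that was split into multiple
# quoted character-strings: the pieces are joined with NO separator.
record_as_published = ["v=spf1", "include:zoho.com.au", "~all"]
assembled = "".join(record_as_published)
print(assembled)  # v=spf1include:zoho.com.au~all -- not valid SPF

# What Zoho expects is a single string that already contains the spaces:
expected = "v=spf1 include:zoho.com.au ~all"
```

Publishing the value as one quoted string, "v=spf1 include:zoho.com.au ~all", would avoid the concatenation problem.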

Can you suggest the solution?

Kerberos status says masked for the kdc server

Posted: 12 Apr 2021 08:18 PM PDT

    Failed to restart krb5-admin-server.service: Unit krb5-admin-server.service is masked.

    sudo systemctl status krb5-kdc.service
    ● krb5-kdc.service
        Loaded: masked (Reason: Unit krb5-kdc.service is masked.)
        Active: inactive (dead) since Tue 2021-04-13 02:42:45 UTC; 26min ago
      Main PID: 477 (code=exited, status=0/SUCCESS)

I am setting up a KDC server and client on Ubuntu, and this part was active earlier. How can I fix this error?

Kubernetes - vSphere Cloud Provider

Posted: 12 Apr 2021 07:55 PM PDT

I'm following this doc https://cloud-provider-vsphere.sigs.k8s.io/tutorials/kubernetes-on-vsphere-with-kubeadm.html

I am using a load balancer as my ControlPlaneEndpoint. Now I would like to join a new master to the cluster, passing the cloud-provider flag as well. With the method below it was possible to join workers, but I can't do the same with a new master.

kubectl -n kube-public get configmap cluster-info -o jsonpath='{.data.kubeconfig}' > discovery.yaml

    # tee /etc/kubernetes/kubeadminitworker.yaml >/dev/null <<EOF
    apiVersion: kubeadm.k8s.io/v1beta1
    caCertPath: /etc/kubernetes/pki/ca.crt
    discovery:
      file:
        kubeConfigPath: /etc/kubernetes/discovery.yaml
      timeout: 5m0s
      tlsBootstrapToken: y7yaev.9dvwxx6ny4ef8vlq
    kind: JoinConfiguration
    nodeRegistration:
      criSocket: /var/run/dockershim.sock
      kubeletExtraArgs:
        cloud-provider: external
    EOF
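A hedged sketch (untested, and assuming a kubeadm version whose JoinConfiguration supports it): joining a control-plane node rather than a worker typically needs a controlPlane stanza in the join configuration, with the cluster certificates made available to the new master beforehand (for example via kubeadm init phase upload-certs):

```yaml
# Hypothetical variant of the JoinConfiguration above for a control-plane join.
apiVersion: kubeadm.k8s.io/v1beta1
kind: JoinConfiguration
discovery:
  file:
    kubeConfigPath: /etc/kubernetes/discovery.yaml
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  kubeletExtraArgs:
    cloud-provider: external
controlPlane:
  localAPIEndpoint:
    advertiseAddress: 0.0.0.0   # placeholder: this node's own address
```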

Thanks

NLB with NGINX controller on my EKS cluster: each service I deploy creates its own NLB instead of using the existing one

Posted: 12 Apr 2021 07:10 PM PDT

I am trying to use an NLB with the NGINX controller on my EKS cluster, but each service I deploy creates its own NLB instead of using the existing one. Here's what I'm doing; please help me see where I'm going wrong.

    kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-0.40.2/deploy/static/provider/aws/deploy.yaml

Then I apply this Deployment, Service, and Ingress:

    apiVersion: v1
    kind: Service
    metadata:
      name: wordpress
      namespace: wordpress
      labels:
        app: wordpress
      annotations:
        kubernetes.io/ingress.class: "nginx"
        service.beta.kubernetes.io/aws-load-balancer-type: nlb-ip
    spec:
      type: LoadBalancer
      ports:
        - port: 80
          targetPort: 8282
          protocol: TCP
          name: http
      selector:
        app: wordpress
    ---
    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      namespace: wordpress
      name: wordpress-ingress
      annotations:
        kubernetes.io/ingress.class: "nginx"
    spec:
      rules:
      - host: wp.somedomain.com
        http:
          paths:
            - path: /
              backend:
                serviceName: wordpress
                servicePort: 8282
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: wordpress
      namespace: wordpress
      labels:
        app: wordpress
    spec:
      selector:
        matchLabels:
          app: wordpress
      strategy:
        type: Recreate
      template:
        metadata:
          labels:
            app: wordpress
        spec:
          containers:
          - image: wordpress:4.8-apache
            name: wordpress
            env:
            - name: WORDPRESS_DB_HOST
              value: wordpress-mysql
            - name: WORDPRESS_DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name:
                  key:
            ports:
            - containerPort: 80
              name: wordpress

Result:

    $ k get svc -n wordpress
    NAME              TYPE           CLUSTER-IP      EXTERNAL-IP                                                           PORT(S)        AGE
    wordpress         LoadBalancer   10.100.172.45   a35bf4xxxxxxxx-bf4b23b8054fxxxx.elb.country-region-1.amazonaws.com    80:32639/TCP   9s
    wordpress-mysql   ClusterIP      None            <none>                                                                3306/TCP       3d10h

    $ k get ing -n wordpress
    NAME                CLASS    HOSTS               ADDRESS                                                            PORTS   AGE
    wordpress-ingress   <none>   wp.somedomain.com   a2ab0axxxxxx-3d81cde05328xxxx.elb.country-region-1.amazonaws.com   80      18s
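Not an authoritative answer, but one thing visible in the manifests above: the wordpress Service itself is type LoadBalancer, and on EKS every Service of that type provisions its own load balancer. If traffic is meant to flow through the ingress-nginx controller's NLB, the application Service would normally be ClusterIP, roughly like this sketch:

```yaml
# Sketch: keep only the ingress-nginx controller Service as type LoadBalancer;
# app Services behind the Ingress stay ClusterIP, so no extra NLB is created.
apiVersion: v1
kind: Service
metadata:
  name: wordpress
  namespace: wordpress
  labels:
    app: wordpress
spec:
  type: ClusterIP          # was LoadBalancer, which created the second NLB
  ports:
    - port: 8282
      targetPort: 8282
      protocol: TCP
      name: http
  selector:
    app: wordpress
```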

How to create a scheduled task via GPO that runs at startup as SYSTEM with highest privileges for only certain machines

Posted: 12 Apr 2021 07:09 PM PDT

All of our workstations in the building are cabled with CAT5e, and because of the way things were built at construction time it's going to be prohibitively expensive to swap out the cabling for something that can handle gigabit speeds. (Yes, I know that theoretically in a perfect world CAT5e should handle gigabit, but in our experience this has resulted in file corruption.)

This hasn't been too much of a problem so far, as we've been running under a 10/100 switch. But now we have a separate need to bump up to gigabit. We'll be using a managed switch so that we can limit those workstations' ports to 100 Mbps Full Duplex to match the cabling.

In order to avoid duplex mismatch, we're also going to have to set the NICs on those machines to match the switch for those ports. I've worked up a small PowerShell script that does this quite nicely.

    Get-NetIPAddress -AddressFamily IPv4 -IPAddress 192.168.0.* | ForEach {
      Get-NetAdapter -InterfaceIndex $_.InterfaceIndex | Where { $_.Status -eq 'Up' } | ForEach {
        $Property = Get-NetAdapterAdvancedProperty -Name $_.Name -DisplayName '*Duplex*'
        $Value = $Property.ValidDisplayValues | Where { $_ -match '100' -and $_ -match 'Full' }
        $Name = $Property.DisplayName

        If ( (Get-NetAdapterAdvancedProperty -Name $_.Name).DisplayValue -ne $Value ) {
          Set-NetAdapterAdvancedProperty -Name $_.Name -DisplayName $Name -DisplayValue $Value
        }
      }
    }

But the script must be run as admin: at startup, as NT AUTHORITY\SYSTEM, with Highest Privileges. And only on the computers in the AD security group I've created.

I've tried using GPO to create a Scheduled Task for this, as discussed here and here, but the task is never created. Nothing related shows up in the workstation's event logs.

I need it as a GPO Preference under User Configuration\Preferences\Control Panel Settings\Scheduled Tasks, so that I can use item-level targeting and point to the security group.

I'm not dead-set on accomplishing the task in this particular way, so if someone has an alternate idea I'm willing to consider it.

But in the meantime, how can I get this Scheduled Task created on these workstations (without going around to everyone and doing it manually)?

Subversion SSL handshake failed and 408 error code

Posted: 12 Apr 2021 06:28 PM PDT

Versions

Subversion: version 1.6.11 (r934486)

Operating System: CentOS release 6.8 (Final)

Background

I have a variety of shell scripts that run as cron jobs on a CentOS machine. The shell scripts commit files to and check out files from Subversion. Today all my scripts started failing with the following error:

svn: OPTIONS of 'https://svn.int.mydomain.edu/eas': SSL handshake failed: SSL alert received: Error in protocol version (https://svn.int.mydomain.edu)

As a troubleshooting step I ran the following command:

openssl s_client -connect svn.int.mydomain.edu:443  

and received the following output (redacted slightly):

    CONNECTED(00000003)
    ---
    Certificate chain
     0 s:/OU=Domain Control Validated/CN=*.int.mydomain.edu
       i:/C=US/ST=Arizona/L=Scottsdale/O=GoDaddy.com, Inc./OU=http://certs.godaddy.com/repository//CN=Go Daddy Secure Certificate Authority - G2
     1 s:/C=US/ST=Arizona/L=Scottsdale/O=GoDaddy.com, Inc./OU=http://certs.godaddy.com/repository//CN=Go Daddy Secure Certificate Authority - G2
       i:/C=US/ST=Arizona/L=Scottsdale/O=GoDaddy.com, Inc./CN=Go Daddy Root Certificate Authority - G2
     2 s:/C=US/ST=Arizona/L=Scottsdale/O=GoDaddy.com, Inc./CN=Go Daddy Root Certificate Authority - G2
       i:/C=US/O=The Go Daddy Group, Inc./OU=Go Daddy Class 2 Certification Authority
     3 s:/C=US/O=The Go Daddy Group, Inc./OU=Go Daddy Class 2 Certification Authority
       i:/C=US/O=The Go Daddy Group, Inc./OU=Go Daddy Class 2 Certification Authority
    ---
    Server certificate
    -----BEGIN CERTIFICATE-----
    REDACTED
    -----END CERTIFICATE-----
    subject=/OU=Domain Control Validated/CN=*.int.mydomain.edu
    issuer=/C=US/ST=Arizona/L=Scottsdale/O=GoDaddy.com, Inc./OU=http://certs.godaddy.com/repository//CN=Go Daddy Secure Certificate Authority - G2
    ---
    No client certificate CA names sent
    Server Temp Key: ECDH, prime256v1, 256 bits
    ---
    SSL handshake has read 5545 bytes and written 373 bytes
    ---
    New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES256-GCM-SHA384
    Server public key is 2048 bit
    Secure Renegotiation IS supported
    Compression: NONE
    Expansion: NONE
    SSL-Session:
        Protocol  : TLSv1.2
        Cipher    : ECDHE-RSA-AES256-GCM-SHA384
        Session-ID: REDACTED
        Session-ID-ctx:
        Master-Key: REDACTED
        Key-Arg   : None
        Krb5 Principal: None
        PSK identity: None
        PSK identity hint: None
        Start Time: 1618275549
        Timeout   : 300 (sec)
        Verify return code: 0 (ok)
    ---
    HTTP/1.1 408 Request Time-out
    content-length: 110
    cache-control: no-cache
    content-type: text/html
    connection: close

    <html><body><h1>408 Request Time-out</h1>
    Your browser didn't send a complete request in time.
    </body></html>
    closed

As you can see, I am getting an HTTP/1.1 408 Request Time-out at the end of the output. I can verify I have access to https://svn.int.mydomain.edu from this box, because a separate installation of SVN (one that came with a Jenkins plugin) works fine from it.

Question

Does anyone have any thoughts on additional troubleshooting techniques? I've searched for this issue but found no fruitful responses.

Why does Apache server return 404 on subfolder, when it was previously working

Posted: 12 Apr 2021 07:54 PM PDT

I just installed some new SSL from GoDaddy on my Apache Ubuntu server.

I then restarted via SSH and everything looks good.

The root site (a wordpress install) now loads fine with https.

However, there is another HTML site in the /app directory, which returns 404.

This was previously working. I've not changed any config files.

Any ideas?

System.Net.WebException: There was an error downloading [URL] The underlying connection was closed: An unexpected error occured on send

Posted: 12 Apr 2021 06:43 PM PDT

I have an ASMX web application installed on IIS 8.5 on Windows Server 2012. When I try to load it via a WSDL client on the server itself, I get the following error:

The application is running under a .NET 2.0 app pool, but the same error occurs when running under .NET 4.0.


The error occurs if I use the full URL (which looks like https://web.site.com/app/test.asmx) and also when I use https://localhost/app/test.asmx.

Any help would be appreciated

Postfix, Dovecot and Spamassassin unexpectedly fill up my disk

Posted: 12 Apr 2021 10:03 PM PDT

I am on a CentOS 7 VPS with a LAMP stack, using Postfix, Dovecot and Spamassassin, with Rainloop as my email client. I start Postfix with:

    systemctl enable postfix
    systemctl restart postfix

and Dovecot as:

    systemctl restart dovecot
    systemctl enable dovecot

After that, my CPU usage goes above 90-99%, my disk usage starts filling up unexpectedly, and I am only able to send email, not receive it. Here is the output when I run this command:

    [root@server ~]# postconf -nf
    postconf: warning: /etc/postfix/master.cf: undefined parameter: mua_sender_restrictions
    postconf: warning: /etc/postfix/master.cf: undefined parameter: mua_client_restrictions
    postconf: warning: /etc/postfix/master.cf: undefined parameter: mua_helo_restrictions
    postconf: warning: /etc/postfix/master.cf: undefined parameter: mua_sender_restrictions
    postconf: warning: /etc/postfix/master.cf: undefined parameter: mua_client_restrictions
    postconf: warning: /etc/postfix/master.cf: undefined parameter: mua_helo_restrictions
    postconf: warning: /etc/postfix/main.cf: undefined parameter: virtual_mailbox_limit_maps
    alias_database = hash:/etc/aliases
    alias_maps = hash:/etc/aliases
    broken_sasl_auth_clients = yes
    command_directory = /usr/sbin
    daemon_directory = /usr/libexec/postfix
    data_directory = /var/lib/postfix
    debug_peer_level = 2
    debugger_command = PATH=/bin:/usr/bin:/usr/local/bin:/usr/X11R6/bin ddd
        $daemon_directory/$process_name $process_id & sleep 5
    dovecot_destination_recipient_limit = 1
    header_checks = regexp:/etc/postfix/header_checks
    html_directory = no
    inet_interfaces = all
    inet_protocols = all
    mail_owner = postfix
    mailq_path = /usr/bin/mailq.postfix
    manpage_directory = /usr/share/man
    message_size_limit = 30720000
    meta_directory = /etc/postfix
    milter_default_action = accept
    mydestination = localhost, localhost.localdomain
    myhostname = mail.myhostname.com
    mynetworks = 127.0.0.0/8
    newaliases_path = /usr/bin/newaliases.postfix
    non_smtpd_milters = $smtpd_milters
    proxy_read_maps = $local_recipient_maps $mydestination $virtual_alias_maps
        $virtual_alias_domains $virtual_mailbox_maps $virtual_mailbox_domains
        $relay_recipient_maps $relay_domains $canonical_maps $sender_canonical_maps
        $recipient_canonical_maps $relocated_maps $transport_maps $mynetworks
        $virtual_mailbox_limit_maps
    queue_directory = /var/spool/postfix
    readme_directory = /usr/share/doc/postfix3-3.5.8/README_FILES
    sample_directory = /usr/share/doc/postfix3-3.5.8/samples
    sendmail_path = /usr/sbin/sendmail.postfix
    setgid_group = postdrop
    shlib_directory = /usr/lib/postfix
    smtp_tls_security_level = may
    smtpd_data_restrictions = check_policy_service unix:/var/log/policyServerSocket
    smtpd_milters = inet:127.0.0.1:8891
    smtpd_policy_service_default_action = DUNNO
    smtpd_recipient_restrictions = permit_mynetworks, permit_sasl_authenticated,
        reject_unauth_destination
    smtpd_sasl_auth_enable = yes
    smtpd_sasl_authenticated_header = yes
    smtpd_sasl_path = private/auth
    smtpd_sasl_type = dovecot
    smtpd_tls_cert_file = /etc/pki/dovecot/certs/dovecot.pem
    smtpd_tls_key_file = /etc/pki/dovecot/private/dovecot.pem
    smtpd_use_tls = yes
    tls_server_sni_maps = hash:/etc/postfix/vmail_ssl.map
    unknown_local_recipient_reject_code = 550
    virtual_alias_domains =
    virtual_alias_maps = proxy:mysql:/etc/postfix/mysql-virtual_forwardings.cf,
        mysql:/etc/postfix/mysql-virtual_email2email.cf
    virtual_gid_maps = static:5000
    virtual_mailbox_base = /home/vmail
    virtual_mailbox_domains = proxy:mysql:/etc/postfix/mysql-virtual_domains.cf
    virtual_mailbox_maps = proxy:mysql:/etc/postfix/mysql-virtual_mailboxes.cf
    virtual_transport = dovecot
    virtual_uid_maps = static:5000
    postconf: warning: /etc/postfix/main.cf: unused parameter: virtual_create_maildirsize=yes
    postconf: warning: /etc/postfix/main.cf: unused parameter: virtual_maildir_extended=yes

And this is the output when running:

    [root@server ~]# postconf -Mf
    postconf: warning: /etc/postfix/master.cf: undefined parameter: mua_sender_restrictions
    postconf: warning: /etc/postfix/master.cf: undefined parameter: mua_client_restrictions
    postconf: warning: /etc/postfix/master.cf: undefined parameter: mua_helo_restrictions
    postconf: warning: /etc/postfix/master.cf: undefined parameter: mua_sender_restrictions
    postconf: warning: /etc/postfix/master.cf: undefined parameter: mua_client_restrictions
    postconf: warning: /etc/postfix/master.cf: undefined parameter: mua_helo_restrictions
    postconf: warning: /etc/postfix/main.cf: undefined parameter: virtual_mailbox_limit_maps
    smtp       inet  n       -       n       -       -       smtpd
        -o content_filter=spamassassin
    submission inet  n       -       n       -       -       smtpd
        -o syslog_name=postfix/submission
        -o smtpd_tls_security_level=encrypt
        -o smtpd_sasl_auth_enable=yes
        -o smtpd_reject_unlisted_recipient=no
        -o smtpd_client_restrictions=$mua_client_restrictions
        -o smtpd_helo_restrictions=$mua_helo_restrictions
        -o smtpd_sender_restrictions=$mua_sender_restrictions
        -o smtpd_recipient_restrictions=permit_sasl_authenticated,reject
        -o milter_macro_daemon_name=ORIGINATING
    smtps      inet  n       -       n       -       -       smtpd
        -o syslog_name=postfix/smtps
        -o smtpd_tls_wrappermode=yes
        -o smtpd_sasl_auth_enable=yes
        -o smtpd_reject_unlisted_recipient=no
        -o smtpd_client_restrictions=$mua_client_restrictions
        -o smtpd_helo_restrictions=$mua_helo_restrictions
        -o smtpd_sender_restrictions=$mua_sender_restrictions
        -o smtpd_recipient_restrictions=permit_sasl_authenticated,reject
        -o milter_macro_daemon_name=ORIGINATING
    pickup     unix  n       -       n       60      1       pickup
    cleanup    unix  n       -       n       -       0       cleanup
    qmgr       unix  n       -       n       300     1       qmgr
    tlsmgr     unix  -       -       n       1000?   1       tlsmgr
    rewrite    unix  -       -       n       -       -       trivial-rewrite
    bounce     unix  -       -       n       -       0       bounce
    defer      unix  -       -       n       -       0       bounce
    trace      unix  -       -       n       -       0       bounce
    verify     unix  -       -       n       -       1       verify
    flush      unix  n       -       n       1000?   0       flush
    proxymap   unix  -       -       n       -       -       proxymap
    proxywrite unix  -       -       n       -       1       proxymap
    smtp       unix  -       -       n       -       -       smtp
    relay      unix  -       -       n       -       -       smtp
    showq      unix  n       -       n       -       -       showq
    error      unix  -       -       n       -       -       error
    retry      unix  -       -       n       -       -       error
    discard    unix  -       -       n       -       -       discard
    local      unix  -       n       n       -       -       local
    virtual    unix  -       n       n       -       -       virtual
    lmtp       unix  -       -       n       -       -       lmtp
    anvil      unix  -       -       n       -       1       anvil
    scache     unix  -       -       n       -       1       scache
    dovecot    unix  -       n       n       -       -       pipe flags=DRhu
        user=vmail:vmail argv=/usr/libexec/dovecot/deliver -f ${sender} -d ${recipient}
    spamassassin unix -      n       n       -       -       pipe flags=DROhu
        user=vmail:vmail argv=/usr/bin/spamc -f -e /usr/libexec/dovecot/deliver -f ${sender} -d ${user}@${nexthop}
    spamassassin unix -      n       n       -       -       pipe flags=R
        user=spamd argv=/usr/bin/spamc -e /usr/sbin/sendmail -oi -f ${sender} ${recipient}

Finally, when I stop Postfix and Dovecot, the disk stops filling up; but as soon as I start them again, disk usage starts growing again.
I would appreciate any help fixing this issue; if anything more is needed to analyze it, I can show it here.
Thanks

A process that completes when run manually stops after a while as a cron job

Posted: 12 Apr 2021 06:16 PM PDT

I have a PHP script that loops over thousands of lines. When I run the PHP file manually, the loop over thousands of lines completes successfully. But when a cron job runs it, it stops after about 143 iterations.

What have I checked:

  • I checked whether something stops my code at the 143rd iteration.
  • I checked whether it is a timeout. Even if I extend the time limit, it still stops after the 143rd iteration.

What have I tried?:

  • I thought it might be a memory issue and increased memory_limit. - Not working

Edit: Even though I didn't make any changes, the situation has improved. I need to know the reason for this. Please share your best guesses and the measures I can take.

Thank you in advance for your valuable thoughts.

ESX host cmd to check if physical host went to battery

Posted: 12 Apr 2021 06:10 PM PDT

I'm trying to troubleshoot some issues on a server and see if they're related to a loss of power; they have cheap APC units that don't have logging or an interface. I've checked the event log in vSphere but don't see any events related to power loss. Is there a command in esxcli or vSphere to determine whether the host server switched from AC power to the APC battery?

Load Balancing DNS with Google Cloud Platform

Posted: 12 Apr 2021 08:01 PM PDT

I plan to achieve load balancing by using Google to balance NS/DNS between each of three servers.

I am setting up three servers with cluster DNS, records are replicated between each server.

I plan to setup NS1/2.example to point towards Google's Load Balancer (Anycast) instead of pointing NS1/2.example to each individual server.

How could I achieve that? What should I be aware of?

Linux module load

Posted: 12 Apr 2021 08:07 PM PDT

I have a question about one of my Linux boxes (RHEL 7.8): the module joydev is loaded on one server but not on the other.

Server A => module joydev is loaded successfully
Server B => module joydev is not loaded

I know I can load the module on server B, but I am looking for the root cause: when both systems' installations are the same, why is the module loaded on one server but not the other?

Connect AWS SSL certificate to Intercom Articles

Posted: 12 Apr 2021 06:02 PM PDT

We keep our support articles in Intercom Articles and need to attach an SSL certificate to the subdomain support.packaly.com. Intercom does not offer its own certificate, so we need to do it through AWS. The complication is that our domain is not at AWS but at Google Domains.

We want the subdomain to have an SSL certificate and redirect it to the Intercom help center. This is the article (video) that we followed, with the one difference that we do not have the domain at AWS:

https://developers.intercom.com/installing-intercom/docs/set-up-your-custom-domain#section-how-to-configure-ssl-with-aws

Is there a workaround for connecting an AWS SSL certificate to a Google Domains subdomain and redirecting it to the help center?

DNSSEC enable and lookaside

Posted: 12 Apr 2021 07:01 PM PDT

I came across a BIND setup where only one DNSSEC value is set, like this:

dnssec-validation yes;    

and the keys in named.conf.options are declared like this:

include "/etc/bind.keys"  

However, the rest of it:

dnssec-enable yes;  dnssec-lookaside auto;  

is not set anywhere at all.

Now the question is: does this setup work at all? I do not see any errors anywhere. I would appreciate any comments, suggestions, or advice. Many thanks in advance!
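For context, a hedged note: on modern BIND releases the two missing options are generally no longer needed - `dnssec-enable` defaulted to `yes` (and was later deprecated and removed), and `dnssec-lookaside` is obsolete since ISC's DLV registry was decommissioned. Assuming such a version, a minimal validating configuration looks roughly like this sketch (the bind.keys path is an assumption; match your distro):

```text
options {
    // Validation on; trust anchors come from the included bind.keys
    // (or use "auto" to rely on the built-in root trust anchor).
    dnssec-validation yes;
};
include "/etc/bind/bind.keys";
```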

AltRecipient AD attribute on mail enabled Public Folder cannot be synchronized in hybrid environment with O365

Posted: 12 Apr 2021 09:06 PM PDT

We have a hybrid environment set up between Exchange 2010 and O365 for both mailboxes and Public Folders. Since putting Public Folders in hybrid mode (through the use of https://docs.microsoft.com/en-us/exchange/collaboration-exo/public-folders/set-up-legacy-hybrid-public-folders ) we keep getting reports every export cycle containing the below for each mail-enabled Public Folder:

The reference attribute [AltRecipient] could not be updated in Azure Active Directory. Remove the reference [PublicFolder] in your local Active Directory directory service.  

Does anyone know why Azure has an issue with the AD attribute that stores forwarding addresses, which, as I understand it, can be enabled once the Public Folders are migrated?

Waiting for localhost : getting this message on all browsers

Posted: 12 Apr 2021 09:06 PM PDT

I am using Ubuntu 14.04 and have php5 and mysql installed. I have 3 web applications on my /var/www/html folder. Until yesterday evening I was able to test and work on the applications. All of a sudden, I am not able to load any of my applications on any of the browsers. I have firefox and chrome installed.

I have checked the availability of MySQL and Apache. Both are running correctly. I have also restarted Apache. I have cleared all the cookies and history from chrome and set it to default under chrome://flags.

After removing all the history and cookies from Chrome, I could load the first login page, but when I provide the UID and password, I get "Waiting for localhost" and the page stalls.

Of the three applications, one of the smaller ones loaded after 10 minutes, but a heavier application did not load at all. However, the browser loads plain HTML files.

I have also tested on WiFi, a mobile internet dongle, and Ethernet, and there are no firewall issues. I have also cleared my machine's DNS cache with:

sudo /etc/init.d/dns-clean restart  

None of this helped. Can someone guide me on how do I resolve this?

Azure AD Connect Single-Sign On

Posted: 12 Apr 2021 08:02 PM PDT

I am trying to set up my domain for Single Sign-On to Azure-Connected services (Primarily, SharePoint Online). I have already run through the setup for Azure AD Connect and am currently able to synchronize my directory to Azure. I see my users in Azure and can sign in using an account. The next logical step for us is to enable Single Sign-On, so that our users are able to connect easier (our users are actually located on a subdomain, which is transparent to them and does not completely match their email addresses). Problem is, during the setup of AD Connect, the option to Enable Single Sign-On was not available. It simply was not on the normal User Sign-In prompt during setup. Has anyone else seen this, or am I simply missing something?

Sometimes: Unable to connect to host 127.0.0.1, or the request timed out. MySQL through Sequel PRO

Posted: 12 Apr 2021 08:02 PM PDT

I have been struggling with this issue for over a year now, and it's really giving me a headache.

I often find I am unable to connect to the MySQL server through Sequel Pro. If I SSH into the server, I can use mysql fine, see processes, etc. My web app works fine too.

When I try to SSH into my MySQL database through Sequel PRO, this message appears instantly:


Unable to connect to host 127.0.0.1, or the request timed out. Be sure that the address is correct and that you have the necessary privileges, or try increasing the connection timeout (currently 10 seconds). MySQL said: Lost connection to MySQL server at 'reading initial communication packet', system error: 0


The ONLY solution is to reboot the server. Sometimes I reboot the server and it still won't work; after a few reboots it works. But usually a single reboot is enough.

  • It happens on all my different Forge servers (php5 & php7) and has happened since day one.
  • Restarting the MySQL server (e.g. sudo service mysql restart) does not help.
  • It happens on different networks (WiFi, local, etc.)
  • I can connect fine from another Mac with a different SSH key (same OS X and Sequel Pro build). I have even tried copying my own SSH key to the other computer and logging on through that. That works fine as well.
  • It happens at random times, often if Sequel Pro was open when my Mac went to sleep (but not always - sometimes I can open it 24 hours later and still be connected). But all of a sudden, I'd be disconnected, and when I try to log in again, I see the above error.
  • In some situations, I can log in to MySQL through Sequel Pro again even though I did not do anything (i.e. reboot the server).

The way I connect:

MySQL Host: 127.0.0.1
Username: something
Password: something
Port: 3306
SSH Host: server-ip
SSH User: something
SSH Key: path to my id_rsa
SSH Port: default/not-set

Any ideas?

My Sequel Pro version: v1.1 build 4499. My OS: OS X El Capitan v10.11

Server: Ubuntu 14.04.3 LTS (GNU/Linux 3.13.0-71-generic x86_64)

MySQL: Ver 14.14 Distrib 5.7.10, for Linux (x86_64) using EditLine wrapper
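Not an answer, but one way to take Sequel Pro out of the equation next time it happens is to build the same tunnel by hand with the same key and connect with the stock mysql client. This is a sketch: the key path, username, and server address below are placeholders mirroring the settings above.

```shell
# Forward local port 3307 to MySQL on the server, over the same SSH credentials
ssh -i ~/.ssh/id_rsa -N -L 3307:127.0.0.1:3306 something@server-ip &

# Connect through the tunnel with the command-line client
mysql -h 127.0.0.1 -P 3307 -u something -p
```

If this also fails with "Lost connection ... reading initial communication packet", the problem is on the server or network side rather than in Sequel Pro itself.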

exchange server with different domain name

Posted: 12 Apr 2021 06:01 PM PDT

We have a Domain Controller with the name example.com (an unregistered domain name), all hosts are joined to the domain, and I want to bring up an Exchange server.

This Exchange server is a member of example.com, and I own the public domain name abc.com.

What I want: if anyone (internal or external) sends an email, the address format should be user1@abc.com, not user1@example.com.

Please give me any suggestions.
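In outline, this is done by making Exchange authoritative for abc.com and changing the e-mail address policy to stamp abc.com addresses. In the Exchange Management Shell that looks roughly like the following sketch (the policy name "Default Policy" is an assumption; check yours with Get-EmailAddressPolicy):

```powershell
# Make Exchange authoritative for the public domain
New-AcceptedDomain -Name "abc.com" -DomainName "abc.com" -DomainType Authoritative

# Stamp user1@abc.com (instead of user1@example.com) as the default reply address
Set-EmailAddressPolicy -Identity "Default Policy" -EnabledEmailAddressTemplates "SMTP:%m@abc.com"
Update-EmailAddressPolicy -Identity "Default Policy"
```

You will also need public DNS for abc.com (MX and, ideally, autodiscover records) pointing at the server before external mail will flow.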

End of script output before headers: php5

Posted: 12 Apr 2021 06:01 PM PDT

I've inherited the sysadmin role on a server that's running a WordPress website on top of Apache 2.4/Debian. It almost works, but it issues a "500 Internal Server Error" from time to time. In my error.log file I see:

End of script output before headers: php5, referer: http://www.xxxxxxx.xxx/wp-admin/post-new.php  

I think the server is running mod_fcgid system wide, since I have

/etc/apache2/conf-enabled/fcgid.conf   

with the following contents:

<Location />
    AddHandler fcgid-script .php
    Options +ExecCGI +FollowSymLinks
    FcgidWrapper /usr/bin/php-cgi .php
</Location>

<Files ~ (\.php)>
    AddHandler fcgid-script .php
    FCGIWrapper /usr/lib/cgi-bin/php5 .php
    Options ExecCGI FollowSymLinks
    allow from all
</Files>

and I've found other questions, here and here, about the same error, which cite a mod_fcgid misconfiguration as the possible cause (wrong values for the PHP_FCGI_CHILDREN and PHP_FCGI_MAX_REQUESTS variables). A reply in this forum also suggests an fcgid misconfiguration and gives more detail about the problem (a possible bottleneck in the number of accepted threads/connections), but it lacks a step-by-step explanation of what to do.

I'm no mod_fcgid expert. Can you help me understand where and how I should set the correct values for PHP_FCGI_CHILDREN and PHP_FCGI_MAX_REQUESTS variables?
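For what it's worth, the usual advice with mod_fcgid is to let mod_fcgid manage the process pool rather than php-cgi itself: leave PHP_FCGI_CHILDREN at 0 (or unset), and keep PHP_FCGI_MAX_REQUESTS at or above FcgidMaxRequestsPerProcess so php-cgi never exits before mod_fcgid expects it to (which produces exactly this "end of script output before headers" error). A sketch, where the numeric values are illustrative assumptions to tune, not taken from the original config:

```apache
# /etc/apache2/conf-available/fcgid.conf -- sketch only
<IfModule mod_fcgid.c>
    AddHandler fcgid-script .php
    FcgidWrapper /usr/bin/php-cgi .php
    # Let mod_fcgid spawn workers; don't let php-cgi fork its own children
    FcgidInitialEnv PHP_FCGI_CHILDREN 0
    # Keep PHP_FCGI_MAX_REQUESTS >= FcgidMaxRequestsPerProcess
    FcgidInitialEnv PHP_FCGI_MAX_REQUESTS 500
    FcgidMaxRequestsPerProcess 500
    FcgidMaxProcesses 15
</IfModule>
```

Note also that the current file defines the handler twice, once in <Location> with /usr/bin/php-cgi and once in <Files> with /usr/lib/cgi-bin/php5; collapsing that to a single wrapper may itself be part of the fix.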

Zabbix PDF Report Generation

Posted: 12 Apr 2021 10:03 PM PDT

Does any of you have an idea how to make Zabbix generate PDF reports? In the forum I found something like this: https://www.zabbix.com/forum/showthread.php?t=24998 .

I tried to implement this on my Zabbix 2.2.3, but when I do I still see the text "Unable to login:". I guess it is a version problem; as you can see, it was only tested on 1.8.8 and 1.8.10. Does anyone have an idea?


One problem is fixed: it was failing due to API version issues. I downloaded a fresh copy from http://zabbixapi.confirm.ch/ and can now generate a PDF report. However, when I try to select a specific site I see only the ALL option, and the PDF is empty.

Below I share a screenshot and an example report: http://pl.scribd.com/doc/237807238/file-1


Does anybody have an idea?

Regards Mick

VMM 2012 Error 20552 - For ISO share VMM does not have appropriate permissions to access the resource

Posted: 12 Apr 2021 10:03 PM PDT

I have added an ISO network share to my VMM 2012 library as follows:

  1. Library servers -> Add Library Share -> Add Unmanaged Share.
  2. I then selected the file share, e.g. \\fs1\ISO
  3. I set the share permissions on \\fs1\ISO to Everyone: Full Control
  4. I set the NTFS permissions to read-only for the following AD accounts:
    • VMM service account
    • VMM Library account
    • HV target host machine account
    • Network service

The problem is that I still get the following error regarding permissions:

Error (20552) VMM does not have appropriate permissions to access the resource \\fs1.domain.local\ISO\Zabbix_2.0_x86.i686-0.0.1.preload.iso on the scvmma1.domain.local server.

Ensure that Virtual Machine Manager has the appropriate rights to perform this action. Also, verify that CredSSP authentication is currently enabled on the service configuration of the target computer scvmma1.domain.local. To enable the CredSSP on the service configuration of the target computer, run the following command from an elevated command line: winrm set winrm/config/service/auth @{CredSSP="true"}

I have also run winrm set winrm/config/service/auth @{CredSSP="true"} on the VMM server, but no joy.

Any ideas please?

Outlook 2010 "Cannot open this item" on Windows 7 64-bit

Posted: 12 Apr 2021 07:01 PM PDT

I have to admit this has stumped me...

User's Workstation

  • Outlook 2010 (32-bit) w/ Cached Exchange Mode enabled
  • Windows 7 Pro (64-bit)

Email account is on Exchange 2003

Problem

The user is unable to open certain emails in Outlook on this computer. Error msg is "Cannot open this item". The same user has a laptop with Outlook 2010 (32-bit) and Windows 7 Pro (32-bit). On his laptop he CAN open these emails without any problems. So to me that says this is a bug with Windows 7 Pro (64-bit). He can also open these emails on his BlackBerry.

Things I've tried to fix this problem...

  1. Recreate his Outlook profile from scratch
  2. Recreate his Windows user profile from scratch
  3. Reinstall Office 2010 from scratch
  4. Move his Exchange mailbox to a different storage group on the server
  5. Installed a Microsoft Hotfix that supposedly fixes the problem (it did not)

Strange thing is - most of the emails he cannot open were emails sent to him from a BlackBerry within the organization. Coincidence?

Any help is greatly appreciated!

/etc/hosts entry for single IP server serving multiple domains

Posted: 12 Apr 2021 09:29 PM PDT

Running Ubuntu 10.04

My server serves 3 different domains using name-based virtual hosts in Apache2. I'm currently using separate virtual hosts to 301-redirect www to the non-www equivalent. It's working, but I don't understand the correct entries for my /etc/hosts file, and I think that is causing problems as I try to set up Varnish.

I understand I need the localhost line

127.0.0.1       localhost localhost.localdomain  

Should I also list each domain there? As in:

127.0.0.1       localhost localhost.localdomain example1.com example2.com example3.com  

What about the entry for the IP of the server? Do I need the following line?

< IP.Of.Server >      example1.com example2.com example3.com  

Also, should I list both www.example.com AND example.com on each line, so requests reach Apache and it can handle the 301 redirect?
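For reference, name-based virtual hosting is driven by the HTTP Host header, not by /etc/hosts, so the served domains generally don't need to appear there at all; /etc/hosts only needs to resolve the machine's own name. A common minimal layout looks like the following sketch (myserver and 203.0.113.10 are placeholders for your hostname and public IP):

```
127.0.0.1       localhost localhost.localdomain
203.0.113.10    myserver.example1.com myserver
```

Adding example1.com etc. to 127.0.0.1 can actually cause the kind of confusion you describe once Varnish sits in front of Apache, because backend lookups then resolve to loopback.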

protocol version mismatch -- is your shell clean?

Posted: 12 Apr 2021 10:22 PM PDT

When following the instructions to do rsync backups given here: http://troy.jdmz.net/rsync/index.html

I get the error "protocol version mismatch -- is your shell clean?"

I read somewhere that I needed to silence the prompt (PS1="") and the motd (.hushlogin) to deal with this. I have done so; the prompt and login banner (MOTD) no longer appear, but the error still occurs when I run:

rsync -avvvz -e "ssh -i /home/thisuser/cron/thishost-rsync-key" remoteuser@remotehost:/remote/dir /this/dir/  

Both ssh client and sshd server are using version 2 of the protocol.

What could be the problem? Thanks.

[EDIT] I have found http://www.eng.cam.ac.uk/help/jpmg/ssh/authorized_keys_howto.html, which suggests it is sometimes necessary to force protocol v2 by using the -2 flag to ssh or slogin:

 ssh -2 -i ~/.ssh/my_private_key remotemachine  

It is not clear whether this solved the problem, as I think I made the change AFTER the error had already changed; in any case, the error has evolved into something else. I'll update this when I learn more. And I will certainly try the suggestion to run this in an emacs shell - thank you.
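For anyone hitting the same thing: the standard check for "is your shell clean?" is to capture what a non-interactive remote command prints, because rsync needs that stream to be completely empty. Using the same key and host placeholders as the rsync command above:

```shell
# Anything this writes (motd, an echo in .bashrc, stty errors...) corrupts
# the rsync protocol stream and triggers the mismatch error
ssh -i /home/thisuser/cron/thishost-rsync-key remoteuser@remotehost /bin/true > ssh-out.dat 2>/dev/null
wc -c < ssh-out.dat    # must report 0 bytes
```

If the count is non-zero, inspect ssh-out.dat and silence whatever produces that output in the remote account's non-interactive startup files (.bashrc and friends), not just the login banner.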
