Tuesday, March 1, 2022

Recent Questions - Server Fault



How do I create new organization with new domain apart from existing one in google cloud

Posted: 01 Mar 2022 01:37 AM PST

Due to my company's split-up, I want to create a new organization so that billing for the two companies can be separated in Google Cloud. But I'm not sure where or how I can do that. Since I'm just a beginner at Google Cloud, any fundamental information would be appreciated.
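A minimal sketch of commands that are usually relevant here, assuming the gcloud CLI (a new organization itself is created by setting up a separate Cloud Identity / Workspace domain, not by a CLI call; the billing subcommands may still sit under beta depending on SDK version, and the IDs below are placeholders):

gcloud organizations list                     # organizations your account can see
gcloud beta billing accounts list             # billing accounts you can administer
gcloud beta billing projects link MY_PROJECT_ID \
    --billing-account=0X0X0X-0X0X0X-0X0X0X    # move a project to the other company's billing account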

Configuring NGINX with proxy_pass set up on a subdirectory pointing to a docker server, and ensuring relative URLs (.js, .css) resolve

Posted: 01 Mar 2022 01:35 AM PST

I have nginx 1.14.0 running on Ubuntu 18.04 server. On that server, I'm attempting to self-host many different applications. My goal is to have each location exist at a subdirectory of my url, server.calebjay.com.

For example, right now I'd like to set up pigallery2 to be available at server.calebjay.com/photos. To do so, I have a docker instance serving on port 800, and I have nginx proxying to it. This partially works, insomuch as index.html loads.

However, relative urls, such as script src, aren't resolving, I believe because they're formed like main.js instead of /photos/main.js.

To test, I can GET https://server.calebjay.com/photos and resolve an index.html, but I get 404s for a lot of the .js and .css files. Confirming this, if I take those relative URLs and request them against the root, e.g. https://server.calebjay.com/main-asdfasdf.js, I still get a 404, while {server-ip-address}/photos/main-asdf.js and https://server.calebjay.com/photos/main-asdf.js both properly return the given JS file.

There are many answers regarding this, however none have worked for me.

My baseline nginx config:

/etc/nginx/nginx.conf

user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

events {
        worker_connections 768;
}

http {
        sendfile on;
        tcp_nopush on;
        tcp_nodelay on;
        keepalive_timeout 65;
        types_hash_max_size 2048;

        include /etc/nginx/mime.types;
        default_type application/octet-stream;

        ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
        ssl_prefer_server_ciphers on;

        access_log /var/log/nginx/access.log;
        error_log /var/log/nginx/error.log;

        gzip on;

        include /etc/nginx/conf.d/*.conf;
        include /etc/nginx/sites-enabled/*;
}

For the subdomain and single docker server to which I'm proxying for now:

/etc/nginx/sites-available/server.calebjay.com.conf

server {
    listen 80 default_server;
    listen [::]:80 default_server;

    server_name server.calebjay.com www.server.calebjay.com;
    return 301 https://$server_name$request_uri;
}

server {
    server_name server.calebjay.com;

    gzip on;

    #location ~ \.css {
    #    add_header  Content-Type    text/css;
    #}
    #location ~ \.js {
    #    add_header  Content-Type    application/x-javascript;
    #}

    #location / {
    # if ($http_referer ~ "^https?://[^/]+/photos/") {
    #     rewrite ^/(.*) https://$http_host/photos/$1 redirect;
    # }
    #    if ($http_referer = "https://server.calebjay.com/photos/") {
    #        rewrite ^/(.*) https://server.calebjay.com/photos/$1 redirect;
    #    }
    #}

    location /photos/ {
        # rewrite ^/photos(/.*)$ $1 break;
        proxy_pass http://localhost:800/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        # sub_filter "<head>" "<head><base href=\"${scheme}://${host}/photos\">";
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }

    listen 443 ssl default_server;
    listen [::]:443 ssl default_server;

    ssl_certificate /etc/letsencrypt/live/server.calebjay.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/server.calebjay.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
}

Each of the commented-out portions is a separate experiment I've tried from various places on the Stack network:

Neither rewrite based on $http_referer worked, though one image did resolve as a result.

Neither having an explicit rule for images nor adding a MIME-type header worked.

Answers regarding static content and try_files didn't work, nor should they I believe, as I'm proxying to a server.

Replacing links using sub_filter didn't work.

Setting location as /photos instead of /photos/ didn't work.

I don't have access to the docker internals, so can't modify the html directly.

How can I get my hrefs to resolve against the proper domain, with the subdirectory of /photos/?

(I did restart nginx after every config change)

Further details:

nginx -V

nginx version: nginx/1.14.0 (Ubuntu)  built with OpenSSL 1.1.1  11 Sep 2018  TLS SNI support enabled  configure arguments: --with-cc-opt='-g -O2 -fdebug-prefix-map=/build/nginx-H4cN7P/nginx-1.14.0=. -fstack-protector-strong -Wformat -Werror=format-security -fPIC -Wdate-time -D_FORTIFY_SOURCE=2' --with-ld-opt='-Wl,-Bsymbolic-functions -Wl,-z,relro -Wl,-z,now -fPIC' --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid --modules-path=/usr/lib/nginx/modules --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-debug --with-pcre-jit --with-http_ssl_module --with-http_stub_status_module --with-http_realip_module --with-http_auth_request_module --with-http_v2_module --with-http_dav_module --with-http_slice_module --with-threads --with-http_addition_module --with-http_geoip_module=dynamic --with-http_gunzip_module --with-http_gzip_static_module --with-http_image_filter_module=dynamic --with-http_sub_module --with-http_xslt_module=dynamic --with-stream=dynamic --with-stream_ssl_module --with-mail=dynamic --with-mail_ssl_module  
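Not from the original post, but one detail that commonly makes sub_filter appear to do nothing is that the upstream response arrives gzip-compressed, so the filter never sees the HTML (this build does include --with-http_sub_module). A hedged sketch of that variant; the cleaner fix, if pigallery2 supports it, is to tell the app its own base path:

location /photos/ {
    proxy_pass http://localhost:800/;
    proxy_set_header Host $host;
    # ask the upstream for an uncompressed body so sub_filter can rewrite it
    proxy_set_header Accept-Encoding "";
    sub_filter_once off;
    sub_filter_types text/html;
    sub_filter '<head>' '<head><base href="/photos/">';
}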

2012 R2 rebooting with no events related, VSS?

Posted: 01 Mar 2022 01:24 AM PST

I have a production 2012 R2 virtualized server (previously XenServer, migrated to xcp-ng, currently 8.0) that had been running fine for 4 years. Recently, the system tried to update the Xen drivers and failed, making the server slow and unusable. I was advised to migrate from the Xen drivers to the XCP-ng drivers, as my system was not accepting the Xen drivers anymore. This worked and the system was again up and running fast. Nevertheless, I have been experiencing sudden reboots (once per day or every couple of days) but cannot find any clue in the system logs as to what is causing them. The last event I can find before the shutdown is 7036, related to VSS, but this seems to be a normal event in the service's life.

How can I debug the problem down to its root to find out what is causing it?
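A hedged starting point using only standard Windows cmdlets (nothing specific to this setup): event 7036 is just a service state change, so pull the kernel-power and shutdown events around each reboot instead.

# Unexpected power loss (41), dirty shutdown (6008) and user/process-initiated restarts (1074)
Get-WinEvent -FilterHashtable @{ LogName = 'System'; Id = 41,6008,1074 } |
    Select-Object TimeCreated, Id, ProviderName, Message |
    Format-List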

libvirtd eating up to 100% CPU with no apparent reason

Posted: 01 Mar 2022 01:09 AM PST

A few days ago libvirt started to behave erratically on my laptop. It consumes a high amount of CPU without any apparent reason. ALL my VMs are shut off, so why is libvirt using up to 100% CPU if no VMs are running?

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
 203328 root      20   0 1640992  37056  22308 R  86,1   0,1  22:57.03 libvirtd

Killing the process makes my laptop happy. Problem comes back as soon as the process is started again. No idea how to debug or fix this, any help is welcome.

Using up to date Ubuntu 21.10, kernel 5.13.0-30-generic.
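A hedged debugging sketch using standard libvirt tooling (it assumes nothing about the actual cause): confirm that no domain is really running, see where the CPU time goes, and turn on libvirtd's own debug logging.

virsh list --all                       # confirm every domain really is shut off
perf top -p "$(pidof libvirtd)"        # which functions are burning CPU (needs perf installed)

# /etc/libvirt/libvirtd.conf - then restart the libvirtd service
log_filters="1:qemu 1:libvirt"
log_outputs="1:file:/var/log/libvirt/libvirtd-debug.log"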

How to install Application Server role on Windows Server 2019

Posted: 01 Mar 2022 01:08 AM PST

We are migrating our web application from 2012 Windows Server (with SQL 2012) to 2019 Windows server (with SQL 2019).

In 2012 R2 we used to install the "Application Server" role.

The "Application Server" role is deprecated since Windows Server 2016, but we are struggling to find its substitutes in Windows Server 2019.

The problem mentioned in the discussion below specifically targets the following items only:

AS-Incoming-Trans AS-Outgoing-Trans AS-HTTP-Activation AS-Web-Support AS-WAS-Support

How to install Application Server role on Windows Server 2016

We are looking for all the below items replacements/substitutes as our application is dependent on them:

[X] Application Server Application-Server
[X] .NET Framework 4.5 AS-NET-Framework
[X] COM+ Network Access AS-Ent-Services
[X] Distributed Transactions AS-Dist-Transaction
[X] WS-Atomic Transactions AS-WS-Atomic
[X] Incoming Network Transactions AS-Incoming-Trans
[X] Outgoing Network Transactions AS-Outgoing-Trans
[X] TCP Port Sharing AS-TCP-Port-Sharing
[X] Web Server (IIS) Support AS-Web-Support
[X] Windows Process Activation Service Support AS-WAS-Support
[X] HTTP Activation AS-HTTP-Activation
[X] Message Queuing Activation AS-MSMQ-Activation
[X] Named Pipes Activation AS-Named-Pipes
[X] TCP Activation AS-TCP-Activation

Of the above items, the most concerning are the ones below, as we are struggling to find information on them.

[X] COM+ Network Access AS-Ent-Services
[X] Distributed Transactions AS-Dist-Transaction
[X] WS-Atomic Transactions AS-WS-Atomic
[X] Incoming Network Transactions AS-Incoming-Trans
[X] Outgoing Network Transactions AS-Outgoing-Trans
[X] Web Server (IIS) Support AS-Web-Support

Regarding COM+: [X] COM+ Network Access AS-Ent-Services

Regarding COM+, it is suggested to execute a PowerShell command which changes one firewall setting, and after that we need to change one registry entry. The corresponding link is given below.

https://www.jorgebernhardt.com/how-to-enable-com-in-windows-server-2016/

But we noticed that in the 2012 OS, if we enable the "COM+ Network Access" role, it does not change the firewall setting, but it does change the mentioned registry entry value.

So, we are unable to understand what else enabling the "COM+ Network Access" role changes in the 2012 OS, apart from the registry entry value. If we get that information, we can apply those settings manually by executing scripts with the required changes.

Question: Is there any way to find out exactly what changes happen at the OS level when "COM+ Network Access" is enabled, so that we can make them manually or through scripts?
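One hedged way to answer this on a disposable 2012 test box: snapshot the firewall rules and the COM3 registry key (the one the linked article refers to; verify the exact path) before and after enabling the role, then diff the exports. All cmdlets below are standard; the paths are illustrative.

Get-NetFirewallRule | Select-Object Name, DisplayName, Enabled |
    Export-Csv C:\temp\fw-before.csv -NoTypeInformation
reg export "HKLM\SOFTWARE\Microsoft\COM3" C:\temp\com3-before.reg /y

Add-WindowsFeature AS-Ent-Services

Get-NetFirewallRule | Select-Object Name, DisplayName, Enabled |
    Export-Csv C:\temp\fw-after.csv -NoTypeInformation
reg export "HKLM\SOFTWARE\Microsoft\COM3" C:\temp\com3-after.reg /y

Compare-Object (Get-Content C:\temp\fw-before.csv) (Get-Content C:\temp\fw-after.csv)
fc C:\temp\com3-before.reg C:\temp\com3-after.reg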

Regarding Distributed Transactions

[X] Distributed Transactions AS-Dist-Transaction
[X] WS-Atomic Transactions AS-WS-Atomic
[X] Incoming Network Transactions AS-Incoming-Trans
[X] Outgoing Network Transactions AS-Outgoing-Trans

Here we are struggling to find information on "WS-Atomic Transactions" and what exactly it does in the 2012 OS. From the information we collected, it looks like it enables the following:

WsatConfig.exe -network:enable

but this requires a certificate, which is a mandatory field.

But if we enable "WS-Atomic Transactions" using the PowerShell command below

Add-WindowsFeature AS-WS-Atomic

it enables WsatConfig but without any certificate, even though that is a mandatory field.

Question: How can we get the details of what the "WS-Atomic Transactions" role does in the 2012 OS, so that we can do the same thing in the 2019 OS?

Regarding the following items:

[X] Incoming Network Transactions AS-Incoming-Trans
[X] Outgoing Network Transactions AS-Outgoing-Trans

We are using the following command in the 2019 OS. We assume that the above items do only the following things in the 2012 OS; please correct me if I am wrong.

C:\Windows\system32> set-dtcnetworksetting -inboundtransactionsenabled $true -outboundtransactionsenabled -remoteclientaccessenabled $true

For the rest of the roles, we found a way.
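A hedged way to validate that assumption is to read the effective MSDTC configuration on both machines and diff the output, rather than guessing what the role toggled (the MsDtc module ships with 2012 and later):

# Run on both the 2012 and the 2019 host and compare
Get-DtcNetworkSetting -DtcName "Local" | Format-List *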

Deploy a PHP application that calls python scripts on AWS

Posted: 01 Mar 2022 01:06 AM PST

I've made a PHP site that calls a bunch of python workers in the background (using the queue-worker system in laravel, where the worker calls the python tool via CLI). The python tools each have their own conda environment setup they need to run correctly.

I can run this setup quite well on a single server, because everything is installed on the machine, but I want to deploy this to the cloud (AWS) in a robust manner.

From what I read, Elastic Beanstalk is quite a nice and easy way to distribute PHP code, with support for deploying new versions and so on, but I cannot see how I could include my python code.

I should probably look into:

  • separating my python tools from the PHP server (with the new problem of "how do I call them then, and wait for their results?")
  • putting everything in a docker container, and rebuilding that every time one of the tools or PHP needs an update (with the new problem of "how do I make it redundant")

What is your wisdom on deploying this kind of setup?
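For reference, a hedged sketch of how the worker could shell out to a tool regardless of where it ends up running, assuming the tools keep their conda environments (environment name, script path and arguments are placeholders):

# inside the queue worker, one conda env per tool
conda run -n tool-env python /opt/tools/my_tool.py --input /tmp/job-123.json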

Exchange Online: move some messages back from online archive

Posted: 28 Feb 2022 11:32 PM PST

Due to a mistake in a retention policy, too many (too recent) messages in user mailboxes were moved to the online archive. I know that MS offers no way back from the online archive. I investigated some options:

  1. Deleting the archive and importing it back into the primary mailbox: I can't do this, because some mailboxes were completely full at the time of archiving and now there's no room for merging.
  2. Exporting the archive to PST; reimporting it to the primary mailbox, filtered by date; deleting and recreating the archive; reimporting the remainder of the PST to the archive: this may work, but it's a huge effort.
  3. In an old post I found that there was a PS script, leveraging EWS, that was able to move selected items to the archive. While wondering if it could do the reverse, I found out that the script is no longer present in the PowerShell Gallery.

Other ideas?

webhook MS Teams integration with Prometheus - request failed

Posted: 01 Mar 2022 12:19 AM PST

I'm struggling with a Microsoft Teams/Prometheus integration on a K8s cluster. I used helm to start all components. Prometheus and Alertmanager are working correctly, and Prometheus communicates with Alertmanager. Then prometheus-msteams receives the POST alert from Alertmanager and should send it to a Microsoft Teams channel, but it doesn't.

2022/03/01 06:49:38 [DEBUG] POST https://xxx.webhook.office.com/webhookb2/xxx-xxx-xxx/IncomingWebhook/xxx
2022/03/01 06:50:08 [ERR] POST https://xxx.webhook.office.com/webhookb2/xxx-xxx-xxx/IncomingWebhook/xxx request failed: Post https://xxx.webhook.office.com/webhookb2/xxx-xxx-xxx/IncomingWebhook/xxx: dial tcp 42.12.12.542:443: i/o timeout

30 seconds and then a timeout. I thought it might be a proxy issue, so I added the extraEnvs parameter to the config map and restarted the pod, but nothing changed. My configuration looks like this:

apiVersion: v1
data:
  connectors.yaml: |
    connectors:
      - alertmanager-warning: https://xxx.webhook.office.com/webhookb2/xxx-xxx-xxx/IncomingWebhook/xxx
      - alertmanager-critical: https://xxx.webhook.office.com/webhookb2/xxx-xxx-xxx/IncomingWebhook/xxx
      extraEnvs:
      HTTPS_PROXY: http://my-proxy.com:911
kind: ConfigMap
metadata:

I also logged into the container to check that /etc/config/connectors.yaml is OK. I'm afraid that this extraEnvs doesn't work somehow. From a K8s worker node I tried manually (with curl) posting some test JSON to the MS Teams channel, and without the proxy it hung. When I exported the HTTPS_PROXY variable, the message was successfully created in the MS Teams channel.

export HTTPS_PROXY=http://my-proxy.com:911
curl -X POST -d @test.json https://xxx.webhook.office.com/webhookb2/xxx-xxx-xxx/IncomingWebhook/xxx

Do you have any idea what could cause the problem? Should this HTTPS_PROXY env variable be listed when I run printenv in the prometheus-msteams container?
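From the YAML above, the extraEnvs block appears to have ended up inside the rendered connectors.yaml file rather than as environment variables on the pod. Assuming the prometheus-msteams chart exposes an extraEnvs value (check its values.yaml; chart, release and deployment names below are assumptions), a hedged sketch of setting it through Helm instead of editing the ConfigMap:

helm upgrade prometheus-msteams prometheus-msteams/prometheus-msteams \
  --reuse-values \
  --set extraEnvs.HTTPS_PROXY=http://my-proxy.com:911 \
  --set extraEnvs.NO_PROXY=10.0.0.0/8

# verify the variable actually reached the container
kubectl exec deploy/prometheus-msteams -- printenv | grep -i proxy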

How do I configure EKS (Amazon Kubernetes) to use a different docker image repository?

Posted: 28 Feb 2022 11:06 PM PST

You'd expect a question this simple to have an Amazon tutorial or documentation, but I can't find any.

How do I configure an EKS cluster to connect to a different self hosted docker registry? I want to start running the open source version internally.

Thanks!
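For what it's worth, EKS has no cluster-wide registry setting; the image reference in each pod spec plus an image pull secret is the usual mechanism. A hedged sketch, with the registry hostname, credentials and names as placeholders:

kubectl create secret docker-registry my-registry-cred \
  --docker-server=registry.internal.example.com \
  --docker-username=deploy \
  --docker-password='...' \
  --namespace default

# then reference it from the pod/deployment spec:
#   spec:
#     imagePullSecrets:
#       - name: my-registry-cred
#     containers:
#       - name: app
#         image: registry.internal.example.com/team/app:1.0.0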

Query on Apache MPM Worker Module (mpm_worker.conf and worker.conf)

Posted: 28 Feb 2022 10:40 PM PST

We have Apache/2.4.18 (Ubuntu) on Ubuntu 16. We generally maintain our configuration in Puppet and use the worker MPM.

Yesterday we started facing connection drops on our application and port 443 started flapping. The error log of Apache was pointing to MPM.

We checked the mods-enabled directory and found that we have 2 files there, mpm_worker.conf and worker.conf. File worker.conf had meagre configuration and it seems it was overriding mpm_worker.conf. We disabled worker (a2dismod worker) so that mpm_worker.conf remains and values specified in that file take effect. After disabling worker and restarting, apache stabilized and started working normally.

I am not sure why there were 2 files for the worker MPM. We also figured out that this worker module was enabled by Puppet, but we are not sure why it did so, because the mpm_worker.conf file was already there.
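For the record, a hedged way to confirm which MPM is actually built in or loaded after a change like this (standard Apache/Debian tooling):

apachectl -V | grep -i mpm    # MPM selected by the binary
apachectl -M | grep -i mpm    # MPM loaded as a module
a2query -M                    # Debian/Ubuntu helper: currently enabled MPM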

Trying to understand S3 Lifecycle rule

Posted: 28 Feb 2022 11:16 PM PST

I am working on an e-commerce website.

I store product photos in an s3 bucket. Once the product is deleted, I also delete the photos from s3 bucket.

I have S3 bucket versioning enabled, but I am not entirely sure how versioning works.

Here are my assumptions:

  1. If a product photo is modified, the old photo is kept with an old version (so the old photo is never deleted)
  2. If a photo is deleted, AWS still keeps the deleted photo however it is marked as deleted

Are the above assumptions correct?

Now I want to create a Lifecycle rule to move the old photos (deleted photos, or old versions of modified photos) to a cheaper storage class.

From S3 Console, I choose Management > Create lifecycle rule. I can see the following options:

  • Move current versions of objects between storage classes
  • Move noncurrent versions of objects between storage classes
  • Expire current versions of objects
  • Permanently delete noncurrent versions of objects
  • Delete expired object delete markers or incomplete multipart uploads

I am not clear on what "noncurrent version" means.

Is a deleted photo a noncurrent version? What about a product photo which remains active for a very long time (say 1 year) without being modified or deleted... does it ever become noncurrent, because it has been sitting in the bucket for too long?

I think the option that I want is this:

[Screenshot of the proposed lifecycle rule configuration]

Does the above rule move deleted and modified photos to a cheaper storage, after 30 days?
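For context, a noncurrent version is any object version that has been superseded, either by a newer upload or by a delete marker; an object that simply sits unmodified stays "current" forever. A hedged sketch of the same rule expressed as a lifecycle configuration (bucket name, days and storage class are illustrative):

aws s3api put-bucket-lifecycle-configuration --bucket my-product-photos \
  --lifecycle-configuration file://lifecycle.json

# lifecycle.json
{
  "Rules": [
    {
      "ID": "tier-old-photo-versions",
      "Status": "Enabled",
      "Filter": {},
      "NoncurrentVersionTransitions": [
        { "NoncurrentDays": 30, "StorageClass": "GLACIER" }
      ]
    }
  ]
}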

Ports are shown as opened in firewall-cmd, but nmap scans show they are closed

Posted: 28 Feb 2022 10:11 PM PST

I am using OpenSUSE Leap 15.3, with ViciBox v10. I have searched the VICIdial forums, but it seems to be an issue with OpenSUSE. I have opened ports with firewall-cmd; the following is the output.

vicibox10:~ # sudo firewall-cmd --list-all
public (active)
  target: default
  icmp-block-inversion: no
  interfaces: eth0 home
  sources:
  services: apache2 apache2-ssl asterisk dhcpv6-client rtp ssh
  ports: 10000-20000/udp 10000-20000/tcp 20001-25000/tcp 20001-25000/udp 5060-5062/tcp 5060-5062/udp
  protocols:
  forward: no
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:

Following is the nmap output:

vicibox10:~ # nmap -sU -p 10000 localhost
Starting Nmap 7.70 ( https://nmap.org ) at 2022-03-01 11:37 IST
Nmap scan report for localhost (127.0.0.1)
Host is up (0.000051s latency).
Other addresses for localhost (not scanned): ::1

PORT      STATE  SERVICE
10000/udp closed ndmp

Nmap done: 1 IP address (1 host up) scanned in 0.28 seconds

As nmap UDP scans are time-consuming, I scan random ports within the opened port range.

I have been stuck on this for the past 4 days. I have also checked with some online UDP scan websites; the ports are not accessible over the WAN either, and I need to access them over the WAN.

Any help will be appreciated. Thanks
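One hedged check worth adding: firewall-cmd only permits traffic, it does not make anything listen, and nmap reports a UDP port as closed when it receives an ICMP port-unreachable, i.e. when no process is bound there. Something like:

ss -ulpn | grep -E ':(10000|5060)'    # is anything actually listening on these UDP ports?
ss -tlpn | grep -E ':(10000|5060)'    # same check for TCP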

What ways can I connect my domain to a hosting provider?

Posted: 28 Feb 2022 09:58 PM PST

I tried to connect my domain to my hosting provider and they gave me 2 methods to do so: I can either use their name servers or create 2 A records. The problem is I already have an A record set up to a different IP that I need, and if I switch my name servers then my domain provider won't let me use advanced DNS, so my current A record won't work. I tested both methods, so I can confirm neither worked. Is there any other way I can link my domain to my hosting provider while still being able to use my current A record? I already contacted my hosting provider and they said the only way to do it was through a name server or an A record.

AWS CLI Usage Issue

Posted: 01 Mar 2022 01:15 AM PST

In our scenario, we previously had some AWS keys. The IAM interface shows (and has shown) no usage for them, but the employee has been able to upload resources. Could anyone advise how to check whether the interface is simply wrong, or whether they were perhaps not using these credentials? Is there a better way to find this out?
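A hedged sketch of the CLI checks that usually settle this (key ID is a placeholder): the last-used API says whether a key has authenticated at all, and CloudTrail shows which access key actually made the upload calls.

aws iam get-access-key-last-used --access-key-id AKIAEXAMPLEKEYID

aws cloudtrail lookup-events \
  --lookup-attributes AttributeKey=AccessKeyId,AttributeValue=AKIAEXAMPLEKEYID \
  --max-results 20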

Only one TCP socket (via nc) able send data to the same host/port at once

Posted: 28 Feb 2022 11:59 PM PST

Simple repro - in one window watch processes in top, in the other run:

nc -lkp 10000 > /dev/null & ( head -50000000 /dev/urandom | nc -N 127.0.0.1 10000 ) & ( head -50000000 /dev/urandom | nc -N 127.0.0.1 10000 )

Observe that only one head and nc process are actively using CPU.

Attach strace to the head that isn't active - see it's stalled on a write, e.g.:

strace: Process 589084 attached
write(1, "\264\347\270\26\27\24'BRb^\353\302\36@\216\17V\210*n\252`\353\330\351\276\2\250\330\350\217"..., 4096^Cstrace: Process 589084 detached
 <detached ...>

Set up two listeners on different ports - e.g. 10000 and 10001, and both go at full speed.

This is a simple example, but I can reproduce it with other inputs and outputs - e.g. zcatting large files and sending them to production services over the network. It's not to do with the input, and it's not to do with the listening socket.

So - why can I only have one TCP connection to any given host/port actively sending data?

There is an independent data source (feel free to experiment if you don't believe me), and an independent process opening its own tcp connection (netstat will show them both open) - the only thing in common is the destination (which doesn't have to be an nc listening on lo - happens to anything).

Given the destination can definitely have multiple incoming sockets receiving data at once, and the source can definitely send data down multiple network sockets at once, I'm struggling to figure out where the contention is coming from, causing only one pipe to be active at once.
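A hedged way to narrow down where the stall sits, without assuming a cause: watch the per-connection send and receive queues while the transfer runs, and see which side is backed up.

ss -tnp '( dport = :10000 or sport = :10000 )'   # Recv-Q/Send-Q show where the data is piling up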

What's the difference between a "degraded" RAID6 array and a "clean" RAID5 array?

Posted: 01 Mar 2022 12:47 AM PST

Suppose you have two RAID arrays, one with N disks and one with N+1 disks. The array with N disks was formatted as a RAID5 and left alone, while the other array was formatted as a RAID6 before one of its disks was removed. Now both arrays have N disks, N-1 disks worth of usable storage, and can survive the loss of one (more) disk.

Besides whatever metadata the RAID controller uses, are there any differences between these two arrays (in terms of data layout, performance, reliability)? Could I convert a RAID6 array with one disk missing to a RAID5 of one less expected disk with minimal "reshaping"/"rewriting"?

I know that there are different "policies"/"alignments" within RAID5/6, but that's probably beyond the scope of this question. Perhaps it should be assumed that both arrays use a policy that is common to both RAID levels.
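With Linux md specifically (this says nothing about hardware controllers), the conversion described is supported as an online reshape; a hedged sketch, assuming /dev/md0 started as a 5-disk RAID6 and is now running degraded on 4:

mdadm --detail /dev/md0                        # confirm level, layout and degraded state first
mdadm --grow /dev/md0 --level=raid5 --raid-devices=4 \
      --backup-file=/root/md0-reshape.bak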

Openstack nova terminating guest vms with oom_kill

Posted: 01 Mar 2022 01:24 AM PST

I am running OpenStack Victoria with a Kolla Ansible deployment; all components are containerised.

The compute node is killing guests (via oom_kill) when memory is maxed out. Is there a way to avoid this? Other hypervisors work fine without this issue. I am using CentOS 8.3. Please let me know if there is a way to avoid this.

Errors :

Feb 27 12:18:15 server1 kernel: neutron-openvsw invoked oom-killer: gfp_mask=0x6200ca(GFP_HIGHUSER_MOVABLE), order=0, oom_score_adj=0
Feb 27 12:18:15 server1 kernel: oom_kill_process.cold.28+0xb/0x10
Feb 27 12:18:15 server1 kernel: [ pid ]   uid  tgid total_vm      rss pgtables_bytes swapents oom_score_adj name
Feb 27 12:18:15 server1 kernel: oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=395bde13c7e0570ef36df008bc028d8701fd76c1b56e2a56afaf254fd53d0043,mems_allowed=0-1,global_oom,task_memcg=/machine/qemu-33-instance-000000dc.libvirt-qemu,task=qemu-kvm,pid=2301214,uid=42436
Feb 27 12:18:17 server1 kernel: oom_reaper: reaped process 2301214 (qemu-kvm), now anon-rss:0kB, file-rss:516kB, shmem-rss:0kB

sar memory utilisation

==================================
10:10:05 AM kbmemfree   kbavail kbmemused  %memused kbbuffers  kbcached  kbcommit   %commit  kbactive   kbinact   kbdirty
12:00:05 PM    877228         0 393690660     99.78         0    500284 2254123104    542.46 374227828  12705256         0
12:10:04 PM    866416         0 393701472     99.78         0    501844 2254259520    542.49 374233440  12704360         0
12:20:04 PM 301182096 300028052  93385792     23.67         0    705140 1938778932    466.57  83794716   5028804         8
12:30:04 PM 301085624 299970968  93482264     23.69         0    779220 1939000988
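A hedged mitigation sketch on the nova side (these are real nova.conf options, but the values are illustrative and the file path depends on your Kolla overrides): stop overcommitting RAM and keep a reserve for the host and the Kolla containers, so the kernel is not forced to pick a qemu-kvm process.

# /etc/kolla/nova-compute/nova.conf (or your equivalent Kolla config override)
[DEFAULT]
ram_allocation_ratio = 1.0
reserved_host_memory_mb = 16384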

How do I sign my own SSL certificates so that they cover www and non-www domain names?

Posted: 01 Mar 2022 12:07 AM PST

When using a commercial Certificate Authority, generating a csr for the common name www.mysite.com and sending it to them will result in a certificate being issued that works for both www.mysite.com and mysite.com.

The signing request is a single name request- just www.mysite.com, so nothing special happens at the csr level:

openssl genrsa -des3 -out mysite.com.key 4096

openssl req -new -key mysite.com.key -out mysite.com.csr
common name, ie your name: www.mysite.com

But what comes back from the commercial CA is a certificate that works on both www and non-www.

Question: How can I take a csr that is just for www.mysite.com and, using openssl with my own certificate authority, issue a certificate that works for both www.mysite.com and mysite.com, just like the commercial companies do?

I know you can modify the csr to add multiple domains with a config file, but only the www version is needed in the csr when using a commercial company. No multi-domain config files are necessary.

Are the commercial CAs modifying the submitted csr to include both versions? Or is there a flag in the signing command that makes the www optional?

Can I modify this command to add both www and non-www versions, without changing the csr?

openssl x509 -req -days 365 -in mysite.com.csr -CA Authority.crt -CAkey Authority.key -set_serial 12345 -out mysite.com.crt  

Or is there a simple way to add a second domain to a csr without a config file?

openssl req -new -key mysite.com.key -out mysite.com.csr
common name, ie your name: mysite.com, www.mysite.com
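For context, public CAs add both names as subjectAltName entries when they issue the certificate, so the CSR itself does not need to change. The same can be done at signing time with -extfile; a hedged sketch (process substitution assumes a bash shell):

openssl x509 -req -days 365 -in mysite.com.csr \
  -CA Authority.crt -CAkey Authority.key -set_serial 12345 \
  -extfile <(printf "subjectAltName=DNS:mysite.com,DNS:www.mysite.com") \
  -out mysite.com.crt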

Error 500 When Accessing Yammer Through Proxy Server

Posted: 28 Feb 2022 10:18 PM PST

We have DirectAccess enabled and web traffic is routed through a proxy server. Can someone help identify where I need to go with this issue, where opening Yammer results in an error 500?

When on the direct access network working remotely, Yammer is opened from the Office 365 portal in Chrome. It displays error 500 for the Nginx server with IP address 13.107.6.159. When I ran a whois lookup it looks like this is a Microsoft owned server.

It is true that emptying the cache in Chrome does sometimes work, but it always reverts back after a reboot.

I have been to the network tab in Chrome and it shows the following information:

Response Headers:

Connection: keep-alive
Content-Length: 572
Content-Type: text/html
Date: Tue, 19 Jan 2021 09:05:20 GMT
nel: {"report_to":"default","max_age":3600,"success_fraction": 0.001}
report-to: {"max_age":3600,"endpoints":[{"url":"https://mmay.nelreports.net/api/report?cat=yammer-prod_central_1"}]}
strict-transport-security: max-age=1234513412313; includeSubDomain
Via: 1.1 hosted.websense 13lonb
X-Bst-Info: t=1611047121,h=13lonb,p=29191_51738:2_12031,u=757195352,c=25124,c=100199,v=7.11.74286.256
X-Bst-Request-Id: tvTDtj:KnZ:305750
x-lodbrok-cell: prod_central_1-c2
X-MSEdge-Ref: Ref A: 123DB7D71E3946A8B18FCA4BD1A85794 Ref B: LON21EDGE0513 Ref C: 2021-01-19T09:05:21Z
x-robots-tag: none

Request Headers:

Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9
Accept-Encoding: gzip, deflate, br
Accept-Language: en-US,en;q=0.9
Cache-Control: no-cache
Connection: keep-alive
Host: www.yammer.com
Pragma: no-cache
Referer: https://www.office.com/
Sec-Fetch-Dest: document
Sec-Fetch-Mode: navigate
Sec-Fetch-Site: cross-site
Sec-Fetch-User: ?1
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.141 Safari/537.36

I will attach a screenshot as well.

I admit that I am not the best at troubleshooting network issues like this. Can someone please advise where I need to look to resolve this?

ansible-playbook --limit more than one host?

Posted: 01 Mar 2022 12:36 AM PST

For various reasons/limitations I cannot make new groups in the inventory file and need to use --limit/-l to specify the hosts.

I was told to do something like:

ansible-playbook -i /path/to/my/inventory/file.ini -l server.1.com server.2.com my-playbook.yml --check --diff

This was throwing an error:

ERROR! the playbook: server.2.com could not be found

From the Ansible Documentation on this subject I found that you could use a separate file to list all the hosts you want to limit. Something like:

ansible-playbook -i /path/to/my/inventory/file.ini -l @list-to-limit.txt my-playbook.yml

However, I need to do it all inline without creating an additional file.
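For reference, --limit takes a single pattern argument, so multiple hosts have to be joined with commas (or colons) rather than passed as separate words; a hedged inline example using the same paths as above:

ansible-playbook -i /path/to/my/inventory/file.ini \
  -l "server.1.com,server.2.com" my-playbook.yml --check --diff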

How to find missing permission in MS SQL server

Posted: 01 Mar 2022 01:03 AM PST

Problem description: I am trying to use SQLDependency on a table of a commercial product (TAC Reservation Assistant). The DB is a large Microsoft SQL 2016 database on which we don't have db_owner rights (only TAC does).

I am now trying - together with support staff of TAC - to grant a SQL-internal user the necessary rights to activate SQLDependency on this commercial database without granting our SQL user db_owner rights.

(With db_owner rights, this works perfectly without error - so our code is correct).

What we already tried: We have so far followed the valuable information on this site: http://keithelder.net/2009/01/20/sqldependency-and-sql-service-broker-permissions/ but because the table which we want to observe with SQLDependency is in its own schema (tac instead of dbo), there is a permission missing on the schema, and we get the following error in our C# code:

Error message: Unhandled Exception occured while starting the WatcherService of Type Checkin. System.Data.SqlClient.SqlException (0x80131904): The specified schema name "tac" either does not exist or you do not have permission to use it.

Goal: As the above error message does not show us which exact right is missing, I would like a hint as to whether there is a log somewhere within MS SQL Server which states exactly what kind of right we don't have.

Does something like this exist?

With kind regards,

John
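A hedged way to see what the account can and cannot do, short of running a server-side trace: impersonate the application user and ask the engine directly (the schema name comes from the question; the user name is a placeholder).

EXECUTE AS USER = 'app_user';                         -- placeholder for the SQL-internal user
SELECT * FROM fn_my_permissions('tac', 'SCHEMA');     -- effective rights on the tac schema
SELECT * FROM fn_my_permissions(NULL, 'DATABASE');    -- effective database-level rights
REVERT;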

Installing Kubernetes on Ubuntu 18.04 LTS (with Docker) - fails on init

Posted: 28 Feb 2022 11:08 PM PST

I am attempting to install Kubernetes on VMs running Ubuntu 18.04 LTS, and running into a problem when trying to initialise the system; the kubeadm init command results in failure (full log below).

VM: 2 CPUs, 512mb RAM, 100 gig disk, running under VMWare ESXi6.

OS: Ubuntu 18.04 LTS server install, fully updated via apt update and apt upgrade before beginning the Docker and Kubernetes installs.

Docker installed as per instructions here, install completes with no errors: https://kubernetes.io/docs/setup/production-environment/container-runtimes/#docker

Kubernetes installed as per instructions here, except for the Docker section (as following those instructions produces a PreFlight error re systemd/cgroupfs): https://vitux.com/install-and-deploy-kubernetes-on-ubuntu/

All installation appears to proceed smoothly with no errors reported, however attempting to start Kubernetes then fails, as shown in the log below.

I am entirely new to both Docker and Kubernetes, though I get the main concepts and have experimented with the online tutorials on kubernetes.io, but until I can get a working system installed I'm unable to progress further. At the point at which kubeadm attempts to start the cluster, everything hangs for four minutes and then exits with the timeout shown below.

root@k8s-master-dev:~# sudo kubeadm init --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.15.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master-dev kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.24.0.100]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master-dev localhost] and IPs [10.24.0.100 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master-dev localhost] and IPs [10.24.0.100 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
        timed out waiting for the condition

This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
        - 'systemctl status kubelet'
        - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
        - 'docker ps -a | grep kube | grep -v pause'
        Once you have found the failing container, you can inspect its logs with:
        - 'docker logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster

I've had a look at both the log journal data and the docker logs but other than lots of timeouts, can't see anything that explains the actual error. Can anyone advise where I should be looking, and what's most likely to be the cause of the problem?

Things already tried: Removing all IPTables rules and setting defaults to "accept". Running with Docker install as per the vitux.com instructions (gives a PreFlight warning but no errors, but same timeout on attempting to init Kubernetes).

Update: Following from @Crou's comment, here is what happens now if I try just 'kubeadm init' as root:

root@k8s-master-dev:~# uptime
 16:34:49 up  7:23,  3 users,  load average: 10.55, 16.77, 19.31
root@k8s-master-dev:~# kubeadm init
[init] Using Kubernetes version: v1.15.3
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR Port-6443]: Port 6443 is in use
        [ERROR Port-10251]: Port 10251 is in use
        [ERROR Port-10252]: Port 10252 is in use
        [ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
        [ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
        [ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
        [ERROR FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
        [ERROR Port-10250]: Port 10250 is in use
        [ERROR Port-2379]: Port 2379 is in use
        [ERROR Port-2380]: Port 2380 is in use
        [ERROR DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`

Re the very high load shown by uptime: it starts as soon as the init is first attempted, and the load remains very high unless a kubeadm reset is done to clear everything down.

EventViewer Error "local computer may not have the necessary registry"

Posted: 01 Mar 2022 01:03 AM PST

I am trying to review Event Viewer logs that were archived from another server.

When accessed, the events are listed properly, but details of each event give the following error:

The description for Event ID .... in Source "Microsoft-Windows-Security-Auditing" cannot be found. The local computer may not have the necessary registry information or message DLL files to display messages from a remote computer

Unfortunately, the server where those logs were archived is inaccessible at the moment.

Is there any way to get the details of the archived event?
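One hedged workaround while the source server is offline: the raw data, including the event XML with all the audit fields, is still inside the .evtx file even when the local machine cannot render the friendly description. The file path and event ID below are placeholders.

Get-WinEvent -Path C:\logs\archived-security.evtx |
    Where-Object Id -eq 4624 |
    Select-Object -First 5 |
    ForEach-Object { $_.ToXml() }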

Give Xen HVM DomU Control of Wifi Card

Posted: 28 Feb 2022 09:59 PM PST

I'm running a Mikrotik Cloud Hosted Router as an HVM DomU under Xen. How do I give it full control of the wifi card instead of my Dom0? The DomU needs to be able to associate and disassociate with networks, as well as host them, so NAT and bridging don't suit my circumstances.
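With the caveat that wifi adapters are often poorly behaved under passthrough, the generic mechanism with the xl toolstack is PCI passthrough of the whole card to the DomU (the BDF below is a placeholder; the host also needs IOMMU/pciback support):

lspci | grep -i network              # find the card's BDF, e.g. 03:00.0
xl pci-assignable-add 03:00.0        # detach it from Dom0

# in the HVM DomU config file:
pci = [ '03:00.0' ]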

Turn on Gzip for combined JS or CSS files without file extension

Posted: 28 Feb 2022 10:05 PM PST

I'm trying to configure gzip on my nginx server. It works for files with a file extension.

To make a decision what kind of file is served over the network, Nginx does not analyze the file contents ... Instead, it just looks up the file extension to determine its MIME type

So when I have a combined CSS file without a file extension, nginx doesn't know it needs to be gzipped and serves it plain.

Is there a way to let nginx know that everything served from a specified location always needs to be gzipped, with or without a file extension?
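A hedged sketch of one way to do this, assuming the extension-less bundles live under dedicated paths (paths are placeholders): gzip decides based on the MIME type, so give those locations an explicit default_type that is listed in gzip_types.

location /combined/css/ {
    default_type text/css;
    gzip on;
    gzip_types text/css;
}

location /combined/js/ {
    default_type application/javascript;
    gzip on;
    gzip_types application/javascript;
}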

Windows Server Backup is doing incremental instead of full backup of Exchange data

Posted: 28 Feb 2022 11:08 PM PST

I am backing up an Exchange Server database to a backup volume on Windows Server 2012 R2, using Windows Server Backup.

I mostly followed the tutorial shown at http://exchangeserverpro.com/backup-exchange-server-2013-databases-using-windows-server-backup/

I hope to back up the data, and also remove old Exchange log files.

The backup is successful, but the log files are not being removed/truncated.

Exchange does not record a full backup in the database settings page. The "Details" panel for the last backup records the last backup as VSS Full backup, successful, but in the "items" list, both C and D are described as "Backup Type": "Incremental".

I cannot find any further settings to control if backup is "Full" or "Incremental" except on the VSS settings, which is set to Full.

Any suggestions?
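For comparison, a hedged command-line equivalent; the -vssFull switch is what tells the VSS writers (including Exchange) that this is a full backup, which is what allows log truncation (drive letters are placeholders):

wbadmin start backup -backupTarget:E: -include:C:,D: -vssFull -quiet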

PostgreSQL install from source on a systemd distro

Posted: 28 Feb 2022 10:05 PM PST

On my current server I have 2 versions of postgresql installed, postgresql-9.1 and postgresql-9.2

I installed them from source from the postgresql website.

The tar.gz folder supplies the install files as well as the start-scripts which can be used to run it. I have copied these start-scripts from each postgresql install as

/etc/rc.d/init.d/postgresql91
/etc/rc.d/init.d/postgresql92

so that I can

service postgresql91 start  

or

service postgresql92 start  

and use them independently.

However, I am trying to do the same thing on a systemd-based Linux (Fedora 22 Server), and there was a warning in the init.d folder telling me that things have changed.

How will I be able to use the start-scripts supplied by postgresql to run the database?
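On a systemd distro the init scripts are replaced by unit files; a hedged sketch of what one unit per version could look like (the install prefix, data directory and postgres user are assumptions based on a typical source install, so adjust to your layout):

# /etc/systemd/system/postgresql91.service
[Unit]
Description=PostgreSQL 9.1 database server
After=network.target

[Service]
Type=forking
User=postgres
ExecStart=/usr/local/pgsql-9.1/bin/pg_ctl start -D /usr/local/pgsql-9.1/data -w
ExecStop=/usr/local/pgsql-9.1/bin/pg_ctl stop -D /usr/local/pgsql-9.1/data -m fast

[Install]
WantedBy=multi-user.target

# then: systemctl daemon-reload && systemctl enable --now postgresql91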

OpenVSwitch between namespaces

Posted: 01 Mar 2022 12:45 AM PST

I'm trying to configure a bridge between two TAP interfaces each created inside their own network namespace, on Linux. I'm using OpenVSwitch as software bridge.

These are the steps that I believe should work:

ip netns add test_ns1
ip netns exec test_ns1 ip tuntap add mode tap testif1
ip netns exec test_ns1 ip addr add 192.168.1.1/24 dev testif1
ip netns exec test_ns1 ip link set testif1 up

ip netns add test_ns2
ip netns exec test_ns2 ip tuntap add mode tap testif2
ip netns exec test_ns2 ip addr add 192.168.1.2/24 dev testif2
ip netns exec test_ns2 ip link set testif2 up

ovs-vsctl add-br test_br
ip netns exec test_ns1 ovs-vsctl add-port test_br testif1
ip netns exec test_ns2 ovs-vsctl add-port test_br testif2

ip netns exec test_ns1 ping -c 2 192.168.1.1
ip netns exec test_ns2 ping -c 2 192.168.1.2
ip netns exec test_ns1 ping -c 2 192.168.1.2
ip netns exec test_ns2 ping -c 2 192.168.1.1

All four ping commands fail and report 100% packet loss.

I would expect to be able to ping the interface from inside its own namespace (testif1 from test_ns1, for example). I can do that with the Quantum interfaces, but not with mine, why?

Then, I am quite sure OpenVSwitch is installed correctly because I am running the stock Ubuntu version and I have OpenStack Quantum running on the same machine.
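For comparison, Quantum wires its namespaces up with OVS internal ports that are created on the bridge and then moved into the namespace, rather than tap devices created inside it; a hedged sketch of that layout with the same addresses:

ovs-vsctl add-br test_br

ovs-vsctl add-port test_br p1 -- set interface p1 type=internal
ip link set p1 netns test_ns1
ip netns exec test_ns1 ip addr add 192.168.1.1/24 dev p1
ip netns exec test_ns1 ip link set p1 up

ovs-vsctl add-port test_br p2 -- set interface p2 type=internal
ip link set p2 netns test_ns2
ip netns exec test_ns2 ip addr add 192.168.1.2/24 dev p2
ip netns exec test_ns2 ip link set p2 up

ip netns exec test_ns1 ping -c 2 192.168.1.2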

Apache, PHP, MySQL Work faster in Linux than Windows?

Posted: 28 Feb 2022 11:16 PM PST

I currently develop for the Drupal CMS using The Uniform Server. I also tried XAMPP and WampServer. Loading each page of Drupal takes more than 50 seconds, which is really painful.

My Computer is:

  • CPU: AMD Sempron Processor LE-1100 1.90 GHz
  • Ram: 2 GB DDR II
  • OS: 64 Bit

Here is my question: do Apache+MySQL+PHP work faster on Linux (CentOS 5.5)? If the answer is yes, how much faster will it be? I would like to know whether it is reasonable and useful to move to Linux.

Restore old Backup Exec tapes with NTbackup

Posted: 01 Mar 2022 12:14 AM PST

I've got old backup tapes made with previous versions (v10 and v12) that I need to pull data from (related to this prior question).

I have a machine set up with Windows Server 2008 and a trial version of Backup Exec 2010. It appears to be able to access the tapes and such when I run Inventory/Catalog commands, but each "inventory" command spins up the drive for a moment then looks (and sounds) like it's doing nothing after a few minutes, and the Job Monitor just shows the job "running."

My main question is -- is there an easier way to read these tapes than going through the whole song and dance of inventory / catalog / scan / etc that BE wants you to do? It was previously suggested to me to try using NTBackup to restore files from tape, but it looks like tape drive support for that was removed in Server 2008 (naturally). All I really need to do is scan the contents of each tape individually and be able to restore data from each - but the typical BE process seems overly complicated to me...

UPDATE - 2011-Feb-09

I've now got a Windows Server 2003 set up with the LTO drive and I'm just trying to use NTbackup to restore. When I open ntbackup.exe, I can see the "LTO Ultrium" drive as a device, but the tape that's loaded is not cataloged. How do you catalog a tape with NTbackup? I see the option to "catalog a backup file", but that asks to browse to disk somewhere for a .bkf file...
