Sunday, August 1, 2021

Recent Questions - Server Fault

Can we install GCP in our own DC as we do with OpenStack? If yes, what are the hardware requirements?

Posted: 01 Aug 2021 09:55 PM PDT

Can we install GCP in our own DC as we do with OpenStack? If yes, what are the hardware requirements? How does billing work in GCP?

Ansible playbook to install Java from within Cygwin

Posted: 01 Aug 2021 09:19 PM PDT

Is it possible to install Java on Windows through Cygwin and Ansible roles?

I've been told to install Java using an Ansible playbook, but Ansible is running inside Cygwin. I'm not clear whether the Java has to be inside Cygwin or on Windows, but I wanted to know if both are possible using an Ansible playbook running inside Cygwin.
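For the "Java on Windows" case, Cygwin normally only hosts the Ansible control process; the install itself is usually driven through Ansible's Windows modules over WinRM. A minimal sketch, assuming WinRM connectivity and Chocolatey on the target; the group name and package name are placeholders:

```yaml
# Playbook sketch: install a JDK on Windows hosts managed over WinRM.
# "winhosts" and "openjdk11" are placeholder names.
- hosts: winhosts
  tasks:
    - name: Install a JDK via Chocolatey
      win_chocolatey:
        name: openjdk11
        state: present
```

Installing Java *inside* Cygwin itself is a different exercise (Cygwin's package manager is not something Ansible manages natively), which is worth deciding before writing the playbook.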

KVM nat command line

Posted: 01 Aug 2021 08:57 PM PDT

What is the correct way to set up NAT networking between a KVM VM and the host?

KVM VM:

No firewall installed.

$ sudo arp-scan -r 5 -t 1000 --interface=eth0 --localnet

10.0.2.2     52:55:0a:00:02:02    locally administered
10.0.2.3     52:55:0a:00:02:03    locally administered

$ ip r

default via 10.0.2.2 dev eth0 proto dhcp metric 100
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 100

ifconfig

eth0: inet 10.0.2.15  netmask 255.255.255.0  broadcast 10.0.2.255
      ether 52:54:00:12:34:56
lo:   inet 127.0.0.1  netmask 255.0.0.0
      inet6 ::1

Host:

:~$ ip r

0.0.0.0/1 via 10.211.1.10 dev tun0
default via 192.168.1.1 dev wlan0 proto dhcp metric 600
10.21xxxxxxxx dev tun0 proto kernel scope link src 10.21xxxxx
xxxxxxxxxxxx dev wlan0
128.0.0.0/1 via 10.211.1.10 dev tun0
192.168.1.0/24 dev wlan0 proto kernel scope link src 192.168.1.172 metric 600
192.168.4.0/22 dev eth0 proto kernel scope link src 192.168.4.8 metric 100

:~$ ifconfig

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.2.3  netmask 255.0.0.0  broadcast 10.255.255.255
        inet6 fe80::76c8:79b4:88d4:7f5c  prefixlen 64  scopeid 0x20<link>
        ether ec:8e:b5:71:33:6e  txqueuelen 1000  (Ethernet)
        RX packets 1700  bytes 194730 (190.1 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2862  bytes 246108 (240.3 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
        device interrupt 16  memory 0xe1000000-e1020000

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 13251  bytes 7933624 (7.5 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 13251  bytes 7933624 (7.5 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

tun0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST>  mtu 1500
        inet 10.211.1.69  netmask 255.255.255.255  destination 10.211.1.70
        inet6 fe80::a920:941c:ffa8:5579  prefixlen 64  scopeid 0x20<link>
        unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  txqueuelen 100  (UNSPEC)
        RX packets 4348  bytes 2242726 (2.1 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 3823  bytes 404190 (394.7 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

wlan0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.1.172  netmask 255.255.255.0  broadcast 192.168.1.255
        inet6 fe80::651b:5014:7929:9ba3  prefixlen 64  scopeid 0x20<link>
        ether d8:55:a3:d5:d1:30  txqueuelen 1000  (Ethernet)
        RX packets 114455  bytes 117950099 (112.4 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 67169  bytes 14855011 (14.1 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

~$ sudo arp-scan -r 5 -t 1000 --localnet

just hangs......  

The host cannot ping 10.0.2.2.

No firewall is enabled.

Tried

$ sudo ip route add default via 10.0.2.0
$ sudo ip route add default via 10.0.2.2
$ sudo ip route add default via 10.0.2.0/24

Can NAT work without virsh?

Can NAT be fixed from the command line only?
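The 10.0.2.x addresses and the 52:54:00:12:34:56 MAC suggest the VM is using QEMU's user-mode (SLIRP) networking rather than a libvirt NAT bridge. With user-mode networking the guest is NATed inside the QEMU process itself, so the host cannot ping 10.0.2.2 or the guest, and arp-scan on the host finds nothing; that is expected behaviour, not a firewall or routing problem. If that is the setup, NAT already works without virsh, and the way to reach the guest from the host is a port forward. A sketch, assuming you launch qemu-system yourself; the disk image name is a placeholder:

```shell
# User-mode NAT with a port forward: host 127.0.0.1:2222 -> guest :22
qemu-system-x86_64 \
  -m 2048 \
  -drive file=guest.qcow2,if=virtio \
  -netdev user,id=net0,hostfwd=tcp:127.0.0.1:2222-:22 \
  -device virtio-net-pci,netdev=net0

# Then, from the host:
ssh -p 2222 user@127.0.0.1
```

If the host and guest need to talk freely in both directions, a tap device plus an iptables MASQUERADE rule (which is what libvirt's default network automates) is the usual alternative.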

Append URL parameter to request URI using Nginx

Posted: 01 Aug 2021 08:54 PM PDT

I'm trying to append URL parameters to specific request URI's within the server block.

This is what I have so far:

if ( $request_uri = "/testing/signup" ) {
    rewrite ^ https://www.example.com/testing/signup?org=7689879&type_id=65454 last;
}

location /testing/ {
    try_files $uri $uri/ /testing/index.php;
}

However, this only works when the original request URI does not have any of its own URL parameters (e.g. www.example.com/testing/signup?abc=hello). I want to keep the original URL params and add my own.

I've tried changing the regex to if ( $request_uri ~* "^/testing/signup" ) { but this causes a loop.

Can anyone help?
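One way to avoid the loop is to key the check on the missing parameter rather than on the URI, and to carry the original query string along explicitly with $args. A sketch (values copied from the question, untested; note a trailing & is left when the original request has no parameters of its own):

```nginx
location = /testing/signup {
    # Redirect only when "org" is not already present, so the
    # rewritten request does not match again (no loop).
    if ($arg_org = "") {
        return 302 /testing/signup?org=7689879&type_id=65454&$args;
    }
    try_files $uri $uri/ /testing/index.php;
}
```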

What does SLIRP stand for?

Posted: 01 Aug 2021 10:17 PM PDT

SLIP stands for Serial Line Internet Protocol. Both this, and its associated SLIRP, are very old; but SLIRP (Danny Gasparovski, 1995) is still used in one form for user-space virtual networking in QEMU.

What is the R in SLIRP? Is SLIRP a backronym, or just a playful name?

Something is "blocking" my images from loading faster; tried Apache configs, Nginx, cache headers, nothing changes

Posted: 01 Aug 2021 07:14 PM PDT

THE ISSUE: the images on my site take a while to start loading, as if something is blocking them, but I am unable to pinpoint the cause.

PHP 8 / latest WHM

LIVE EXAMPLE : https://www.stubfeed.com/crypto

WHAT I TRIED [.0A.]: changing web hosting company => no change, there is something in my web code...

WHAT I TRIED [.0B.]: I was on CentOS 7 and changed to CentOS 8 => still the same

WHAT I TRIED [.0C.]: disabling php-fpm => it helped a lot, but the wait for images is still there

WHAT I TRIED [.0D.]: output buffers on or off => still the same

WHAT I TRIED [.1.]: merged all my .css and .js into fewer files (2 .css, 2 .js instead of multiple) // also added font-display:swap;

WHAT I TRIED [.2.]: Nginx reverse proxy for Apache (on or off => no change, since Cloudflare acts as a CDN...)

WHAT I TRIED [3]: Apache configs

Header set X-XSS-Protection "1; mode=block"
Header set Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"
Header always set X-Content-Type-Options "nosniff"
Header always set Referrer-Policy "no-referrer"
Header always set Permissions-Policy "geolocation=(self),midi=(self),sync-xhr=(self),magnetometer=(self),gyroscope=(self),fullscreen=(self)"

WHAT I TRIED [4]: .htaccess cache-control headers

<IfModule mod_headers.c>
    ExpiresActive On
    <FilesMatch "\.(webp|webm|flv|ico|pdf|avi|mov|mp4|m4v|m4a|ppt|doc|mp3|wmv|wav|gif|jpg|jpeg|png|apng|swf|css|js|otf|ttf|woff)$">
        Header set Cache-Control "max-age=31536000, public"
    </FilesMatch>
</IfModule>

WHAT I TRIED [5]: converted ALL my images to WebP at -quality 60 (with ImageMagick) => it reduced the size a lot ;) ==>> but still didn't fix the issue.

WHAT I TRIED [6]: in Google Chrome => "empty cache and hard reload" => seems faster than a normal load ???

WHAT I TRIED [7]: in the Google Chrome network tab I see "loaded from cache", but with my own eyes I see the menu appear very fast while all the other images wait 1 or 2 seconds before showing up

WHAT I TRIED [8]: added a page rule in Cloudflare for https://www.stubfeed.com/media/* Browser Cache TTL: a year, Cache Level: Cache Everything, Edge Cache TTL: 14 days, Cache Deception Armor: On, Origin Cache Control: On

WHAT I TRIED [9]: loading 4, 6, 9, 23 images => all the same, still: what I see with my eyes (wait 1 sec, then the images appear....) [the speed testers says ...]

WHAT I TRIED [10]: a service worker caching all fetches... => I turned it off because checking the cache first was terribly slower... :( you can see it there: https://www.stubfeed.com/ws.js?v=202108000021 (in the cacheable array I added webp, but I removed it...)

THE RESULT WITH ALL THOSE TRIES: the menu, the .css, the .js ==>> very fast and instantaneous... but the images still take 1 (sometimes 2) seconds...

you can see a report here : https://gtmetrix.com/reports/www.stubfeed.com/4MGVqAFv/


curl -I https://www.stubfeed.com/media/feed/2021/08/02/14463960829226.webp

HTTP/2 200
date: Mon, 02 Aug 2021 02:13:44 GMT
content-type: image/webp
content-length: 78296
x-content-type-options: nosniff
referrer-policy: no-referrer
permissions-policy: geolocation=(self),midi=(self),sync-xhr=(self),magnetometer=(self),gyroscope=(self),fullscreen=(self)
last-modified: Mon, 02 Aug 2021 01:40:11 GMT
vary: Accept-Encoding
x-xss-protection: 1; mode=block
strict-transport-security: max-age=15552000; includeSubDomains; preload
cache-control: public, max-age=31536000
cf-cache-status: HIT
age: 1483
accept-ranges: bytes
expect-ct: max-age=604800, report-uri="https://report-uri.cloudflare.com/cdn-cgi/beacon/expect-ct"
report-to: {"endpoints":[{"url":"https:\/\/a.nel.cloudflare.com\/report\/v3?s=QawI6BH%2FlcrvvsUxJ6VgTJzwNqvb%2BVqBRo1gbxng6JRggJe9MgZbkWhLtMjmwvwC8yB3SLaoJXT%2BvNZDuLCzya1g8HlxGFbpFMzXB%2F4p%2B9GcIGIZANk%2FjfvIx0Lu2t07QIsr"}],"group":"cf-nel","max_age":604800}
nel: {"report_to":"cf-nel","max_age":604800}
server: cloudflare
cf-ray: 6783c9ea7f3c4bd1-YUL
alt-svc: h3-27=":443"; ma=86400, h3-28=":443"; ma=86400, h3-29=":443"; ma=86400, h3=":443"; ma=86400

How to create a shared folder as a domain admin in DC environment?

Posted: 01 Aug 2021 04:58 PM PDT

How can I create a shared folder on the Windows server? I am a domain admin with full administrative rights, logged on from a domain-joined Windows 10 PC.

I was able to access the server drive using the UNC path \\srvname\c$; however, I can't edit the share permissions of the newly created folder even though I have administrative rights.

Is there any other way for a domain admin to manage shared folders? Are there tools, similar to the Administrative Tools, available for this?
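If the goal is to create and permission shares remotely without touching the c$ admin share, the SmbShare cmdlets can do it from an elevated PowerShell session on a management PC. A sketch; the server, folder, and group names are placeholders:

```powershell
# Create the folder on the remote server (placeholder names throughout).
Invoke-Command -ComputerName srvname -ScriptBlock {
    New-Item -Path 'C:\Shares\Team' -ItemType Directory -Force
}

# Share it and set share permissions over a CIM session.
$cim = New-CimSession -ComputerName srvname
New-SmbShare -CimSession $cim -Name 'Team' -Path 'C:\Shares\Team' `
    -FullAccess 'DOMAIN\Domain Admins'
Grant-SmbShareAccess -CimSession $cim -Name 'Team' `
    -AccountName 'DOMAIN\SomeGroup' -AccessRight Change -Force
```

The GUI equivalent is Computer Management (compmgmt.msc) connected to the remote server: Shared Folders -> Shares.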

Updated VirtIO SCSI driver on my Windows 2012 R2, and now only boots into recovery. How to recover?

Posted: 01 Aug 2021 08:44 PM PDT

I have a Hetzner W2012 R2 cloud VM, and I updated the VirtIO vioscsi driver on it. Now it only boots into recovery mode and I cannot access the C: drive even from a command prompt.

Is there any way to save this VM or is it a goner?
I have no (recent) snapshot of it :(

I can mount the previous VirtIO ISO, but how can I reinstall the old driver when I have no access to the C: drive?

Update 1:
I am able to get access to the C: drive with drvload vioscsi.inf.
Still trying to figure out how to reinstall the old drivers.

Update 2
I used dism /image:e:\ /add-driver /driver:vioscsi.inf to add the driver but it still won't boot.

Update 3
Startup repair gives error 0x490.

No clue what to do next or how to even troubleshoot why it is not booting.

Change port 80 to IPv4 unspecified instead of loopback on server 2019

Posted: 01 Aug 2021 04:59 PM PDT

So I have this strange problem where I can't install the Milestone system on Server 2019. The error says the machine repeatedly refuses the connection. When I check with netstat, port 80 is used by 127.0.0.1.

I have tried stopping the World Wide Web Publishing Service, but it doesn't help. Uninstalling all web apps doesn't work either.

So I tested it on an earlier backup image with a minimum of programs installed, and it works. But due to the number of changes I have made to the machine since then, I'd rather not reinstall everything, as that would take a lot more work and downtime.

On the image where the installation works, I noticed in Resource Monitor > Listening Ports that port 80 is on "IPv4 unspecified", whereas on my current image it says port 80 is on "IPv4 loopback".

So how would I change it so that it becomes unspecified again?
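Before changing the binding, it may help to identify which process owns the 127.0.0.1:80 listener; Resource Monitor shows the state but not always the culprit. A sketch from an elevated prompt (the PID is a placeholder to fill in from the netstat output):

```powershell
# Show listeners on port 80 together with the owning PID (last column)
netstat -ano | findstr ":80 "

# Map that PID to a process name (1234 is a placeholder)
tasklist /FI "PID eq 1234"

# If the owner is PID 4 (System), HTTP.sys holds the port; this lists
# the registered URL prefixes, showing which service bound 127.0.0.1
netsh http show servicestate view=requestq
```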

Centos Infected By Kinsing Mining Malware [duplicate]

Posted: 01 Aug 2021 08:57 PM PDT

My server was infected by the Kinsing malware. I managed to clean it and remove the cronjob, and so far it hasn't returned. However, I noticed other running processes that look fishy. Normally I don't pay attention to anything except my regular services like MySQL and Apache, because they are usually at the top of the process list. Now I've become paranoid.

Do you spot any culprit here?

(screenshot: running services)

Compute Engine System service account service permissions issue

Posted: 01 Aug 2021 04:32 PM PDT

I am trying to set up an instance schedule for my VM instance to start and stop at particular times. When I try to add the VM instance to the schedule, I get a permission error related to my Google service account. I have added the roles Compute Instance Admin (v1) and Compute Admin to my service account via IAM, but I still get the same error: "Compute Engine System service account service-xxx needs to have [compute.instances.start, compute.instances.stop] permissions applied in order to perform this operation"

Has anyone faced a similar issue? Please suggest how to fix it.
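The error names the Compute Engine *system* service account (service-xxx@compute-system.iam.gserviceaccount.com), which is a different principal from the default Compute Engine service account you may have granted roles to. A sketch of granting that system account the role instance schedules need; the project ID and project number are placeholders (in the IAM console, "Include Google-provided role grants" must be checked to see this account):

```shell
# my-project and 123456789012 are placeholders for your project ID/number.
gcloud projects add-iam-policy-binding my-project \
  --member="serviceAccount:service-123456789012@compute-system.iam.gserviceaccount.com" \
  --role="roles/compute.instanceAdmin.v1"
```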

Br, Ramkrishna

Postfix doesn't send mail

Posted: 01 Aug 2021 04:53 PM PDT

I am trying to learn about mail servers and am quite daunted by all of the moving pieces in the setup.

Currently I am unable to send mail from my SMTP server. I can receive mail fine from various domains.

In my logs I see the entries below when trying to send mail:

  • mail.log
Jul 31 11:59:15 mail postfix/submission/smtpd[1290]: connect from unknown[78.136.22.232]
Jul 31 12:01:03 mail postfix/postscreen[1294]: CONNECT from [209.85.214.179]:46881 to [45.77.139.149]:25
Jul 31 12:01:03 mail postfix/postscreen[1294]: PASS OLD [209.85.214.179]:46881
Jul 31 12:01:03 mail postfix/smtpd[1297]: connect from mail-pl1-f179.google.com[209.85.214.179]
Jul 31 12:01:05 mail postfix/smtpd[1297]: 03178A46: client=mail-pl1-f179.google.com[209.85.214.179]
Jul 31 12:01:05 mail postfix/cleanup[1299]: 03178A46: message-id=<CAMJ_LmbWv_5o7GZJx8RwqOqz52O_sWTrgX0_=mSi_Vt6+uz++Q@mail.gmail.com>
Jul 31 12:01:05 mail postfix/cleanup[1299]: 03178A46: milter-reject: END-OF-MESSAGE from mail-pl1-f179.google.com[209.85.214.179]: 5.7.1 Gtube pattern; from=<kareltsvetkov@gmail.com> to=<hostmaster@96-fromsofia.net> proto=ESMTP helo=<mail-pl1-f179.google.com>
Jul 31 12:01:05 mail postfix/smtpd[1297]: disconnect from mail-pl1-f179.google.com[209.85.214.179] ehlo=2 starttls=1 mail=1 rcpt=1 data=0/1 quit=1 commands=6/7
Jul 31 12:04:15 mail postfix/submission/smtpd[1290]: timeout after CONNECT from unknown[78.136.22.232]
Jul 31 12:04:15 mail postfix/submission/smtpd[1290]: disconnect from unknown[78.136.22.232] commands=0/0
root@mail:~#

My OS is Debian 9 and all iptables rules have been flushed while testing this. Here are my master.cf and main.cf --> https://pastebin.com/9WUnzYZt

Thanks

EDIT1: Something I forgot to mention. I grepped 'smtp.*restrict' in my main.cf and noticed that smtpd_recipient_restrictions points to a SQL map file. The file contains the query below. Can this be the reason for outgoing mail failing?

root@mail:~# grep smtp.*restrict /etc/postfix/main.cf
smtpd_relay_restrictions =      reject_non_fqdn_recipient
smtpd_recipient_restrictions = check_recipient_access mysql:/etc/postfix/sql/recipient-access.cf
smtpd_client_restrictions =     permit_mynetworks
smtpd_helo_restrictions =   permit_mynetworks
smtpd_data_restrictions = reject_unauth_pipelining
root@mail:~# grep query /etc/postfix/sql/recipient-access.cf
query = select if(sendonly = true, 'REJECT', 'OK') AS access from accounts where username = '%u' and domain = '%d' and enabled = true LIMIT 1;
root@mail:~#

EDIT2: I configured mutt to rule out a client-side issue, but I still can't get it working.

└─[$] <> cat muttrc
set folder = "imaps://hostmaster@96-fromsofia.net@imap.96-fromsofia.net:993"
set smtp_url = "smtp://hostmaster@96-fromsofia.net@smtp.96-fromsofia.net:587"

set from = "hostmaster@96-fromsofia.net"
set realname = "hostmaster"

set spoolfile = "+Inbox"
set record = "+Sent"
set trash = "+Trash"
set postponed = "+Drafts"

mailboxes =Inbox =Sent =Trash =Drafts =Junk =Other

However, whenever I try to send mail, regardless of whether it's to another domain or to the same address as the sender, I see this error:

SMTP session failed: 451 4.3.0 <hostmaster@96-fromsofia.net>: Temporary lookup failure   
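"451 4.3.0 ... Temporary lookup failure" usually means Postfix could not complete a table lookup (here, plausibly the MySQL map behind check_recipient_access) rather than anything client-side. The map can be exercised directly with postmap, which separates a broken query or database connection from a mutt problem. A sketch, run on the mail server, using the address from the question:

```shell
# Query the same MySQL map Postfix uses. A healthy map prints OK or
# REJECT; a broken DB connection or query prints an error instead.
postmap -q "hostmaster@96-fromsofia.net" mysql:/etc/postfix/sql/recipient-access.cf

# Watch the log while sending to catch the exact failing lookup
tail -f /var/log/mail.log
```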

POSTFIX: New mail server. Can neither send nor receive mail

Posted: 01 Aug 2021 04:56 PM PDT

I am trying to learn about mail servers and have set one up. However, I can neither send nor receive mail. I can log in to my email account, and I see the folders (Inbox, Sent, Trash, etc.) being populated within the email client. Also, /var/log/syslog indicates that my user has successfully logged in via imap-login.

When I click the test button in Thunderbird before logging in, the test always fails with: "Thunderbird failed to find the settings for your email account". I have turned off the firewall on the server completely. I believe my DNS records are properly set up, as an MX lookup returns no errors/warnings. I have verified that the MySQL user specified in Dovecot has permissions on the database used for mail; the domain, alias, and accounts tables have all been created, and hostmaster@mydomain.abc has been imported into the accounts table.

Any ideas?

This is the guide I followed to set it up, and I too am using Debian 9. Thanks!

How to mount existing Intel Raid ISM (raid 0) device on linux?

Posted: 01 Aug 2021 04:45 PM PDT

I have two 1TB drives from 2011 that I need to get the data off of ASAP. Both drives are working and were configured as RAID0 under a Windows Vista machine. Both are plugged into my Fedora machine, and I need to assemble them as a RAID device so I can pull the data off. The first drive shows up as /dev/sda and the second as /dev/sdb. The original machine configured part of the RAID array in the BIOS, which my current machine cannot do. Is there a way to mount these drives on my machine to access the data?

By running the command "sudo mdadm -E /dev/sda /dev/sdb" this is the output:

mdadm: /dev/sda is not attached to Intel(R) RAID controller.
mdadm: /dev/sda is not attached to Intel(R) RAID controller.
mdadm: /dev/sdb is not attached to Intel(R) RAID controller.
mdadm: /dev/sdb is not attached to Intel(R) RAID controller.
/dev/sda:
          Magic : Intel Raid ISM Cfg Sig.
        Version : 1.0.00
    Orig Family : 004c6f77
         Family : 004c6f77
     Generation : 000002cd
     Attributes : All supported
           UUID : 6dcf9662:d3c699ac:3b3da1b4:011a761c
       Checksum : 0c68413e correct
    MPB Sectors : 1
          Disks : 2
   RAID Devices : 1

[3Play]:
           UUID : 094c270d:4dd4aead:f6cc4ab7:e3a2fa3b
     RAID Level : 0
        Members : 2
          Slots : [UU]
    Failed disk : none
      This Slot : ?
    Sector Size : 512
     Array Size : 3907039232 (1863.02 GiB 2000.40 GB)
   Per Dev Size : 1953519880 (931.51 GiB 1000.20 GB)
  Sector Offset : 0
    Num Stripes : 7630936
     Chunk Size : 128 KiB
       Reserved : 0
  Migrate State : idle
      Map State : normal
    Dirty State : clean
     RWH Policy : off

  Disk00 Serial : 9VP0XDQA
          State : active
             Id : 00040000
    Usable Size : 1953514766 (931.51 GiB 1000.20 GB)

  Disk01 Serial : 9VPCVD99
          State : active
             Id : 00050000
    Usable Size : 1953514766 (931.51 GiB 1000.20 GB)

/dev/sdb:
          Magic : Intel Raid ISM Cfg Sig.
        Version : 1.0.00
    Orig Family : 004c6f77
         Family : 004c6f77
     Generation : 000002cd
     Attributes : All supported
           UUID : 6dcf9662:d3c699ac:3b3da1b4:011a761c
       Checksum : 0c68413e correct
    MPB Sectors : 1
          Disks : 2
   RAID Devices : 1

[3Play]:
           UUID : 094c270d:4dd4aead:f6cc4ab7:e3a2fa3b
     RAID Level : 0
        Members : 2
          Slots : [UU]
    Failed disk : none
      This Slot : ?
    Sector Size : 512
     Array Size : 3907039232 (1863.02 GiB 2000.40 GB)
   Per Dev Size : 1953519880 (931.51 GiB 1000.20 GB)
  Sector Offset : 0
    Num Stripes : 7630936
     Chunk Size : 128 KiB
       Reserved : 0
  Migrate State : idle
      Map State : normal
    Dirty State : clean
     RWH Policy : off

  Disk00 Serial : 9VP0XDQA
          State : active
             Id : 00040000
    Usable Size : 1953514766 (931.51 GiB 1000.20 GB)

  Disk01 Serial : 9VPCVD99
          State : active
             Id : 00050000
    Usable Size : 1953514766 (931.51 GiB 1000.20 GB)

As you can see from the output, both drives are alive and the previous RAID configuration is recognized; however, I cannot mount it.

After issuing the command "sudo IMSM_NO_PLATFORM=1 mdadm --assemble --verbose /dev/md0 /dev/sda /dev/sdb" I get this output:

mdadm: looking for devices for /dev/md0
mdadm: /dev/sda is identified as a member of /dev/md0, slot -1.
mdadm: /dev/sdb is identified as a member of /dev/md0, slot -1.
mdadm: added /dev/sdb to /dev/md0 as -1
mdadm: added /dev/sda to /dev/md0 as -1
mdadm: Container /dev/md0 has been assembled with 2 drives

and finally after trying to mount it with "sudo mount /dev/md0 /mnt", I get the following output:

mount: /mnt: can't read superblock on /dev/md0.  

After using the GUI tool gnome-disks, /dev/md0 shows up but says "Block device is empty".

I am probably going about mounting these incorrectly, and I need help as soon as possible. Thanks.
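The assemble output says mdadm built the IMSM *container*; the RAID0 volume inside it is a separate block device that still has to be started, and it is that device (not the container) that carries the partition table and filesystem, which is why mounting /dev/md0 fails. A sketch of the usual sequence; /dev/md126 is what mdadm typically names the started volume, but the name may differ:

```shell
# 1. Assemble the IMSM container (already done above)
sudo IMSM_NO_PLATFORM=1 mdadm --assemble --verbose /dev/md0 /dev/sda /dev/sdb

# 2. Start the RAID volume(s) described inside the container
sudo IMSM_NO_PLATFORM=1 mdadm -I /dev/md0

# 3. See what appeared (often /dev/md126, with partitions md126p1, ...)
cat /proc/mdstat
lsblk

# 4. Mount a partition of the started volume, read-only for safety
sudo mount -o ro /dev/md126p1 /mnt
```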

Cron is not running from docker container... failed

Posted: 01 Aug 2021 07:03 PM PDT

I am trying to create a cron task in a Docker container. Everything is configured according to @VonC's answer.
My Dockerfile looks like this:

FROM python:3.6.9

WORKDIR usr/src/mydir
COPY requirements.txt .

# Add crontab file in the cron directory
ADD crontab /etc/cron.d/hello-cron

# Give execution rights on the cron job
RUN chmod 0644 /etc/cron.d/hello-cron

# Create the log file to be able to run tail
RUN touch /var/log/cron.log

# Install cron
RUN apt-get update
RUN apt-get -y install cron

# Run the command on container startup
CMD cron && tail -f /var/log/cron.log

RUN pip install --no-cache-dir -r requirements.txt
COPY . .

But the cron service doesn't start by default:

[FAIL] cron is not running ... failed!  

The cron service only starts working after launching it explicitly inside the container:

service cron start  

What's wrong?

Create Google Cloud Managed SSL Certificate for a subdomain

Posted: 01 Aug 2021 09:07 PM PDT

I have my main domain www.example.com hosted on Route 53 on AWS.

I've created the custom domain on Google Cloud sub.example.com and set the appropriate NS records.

What I want to do now is create a new managed SSL certificate for this subdomain.

Is this possible? Is it good practice given that I want to continue adding more subdomains like sub1.example.com and creating a certificate for each one? Since I am keeping example.com hosted at Route 53, I don't think I can create a single managed SSL certificate for all of the possible subdomains that I may have on Google Cloud?
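Managed certificates are issued per listed domain, so a certificate scoped to one subdomain works even while the apex stays on Route 53; the CA only needs each listed name to resolve to the load balancer's IP. Classic managed certificates do not support wildcards, so per-subdomain certificates (or one certificate listing several subdomains, up to the documented limit) is the usual pattern. A sketch with gcloud; the certificate names are placeholders:

```shell
# One managed certificate for a single subdomain
gcloud compute ssl-certificates create sub-example-cert \
  --domains=sub.example.com --global

# Or one certificate listing several subdomains at once
gcloud compute ssl-certificates create multi-sub-cert \
  --domains=sub.example.com,sub1.example.com --global
```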

CA root install issue on Ubuntu 16.04 LTS Server

Posted: 01 Aug 2021 10:02 PM PDT

I am setting up nginx reverse proxy on Ubuntu 16.04 LTS Server for Web-Server.

There will be three different services running on Ubuntu. The services will communicate with each other using API calls, and clients (browser/mobile) will likewise communicate with the services using API calls.

To validate client SSL requests, I need to install certificates on the server PC. Only after authentication will a request be processed or forwarded for further processing.

For each service I have certificate files in the following formats: Ser1_cert.crt, Ser1_key-decryp.key, Ser1_keyfile.key, Ser1_exported.pfx

Similarly, Ser2_... and Ser3_... files are available.

For the CA, I have CA_50EA.crt and CA_50EA.pfx files.

I am not clear on which certificate file format I am supposed to use for the installation.

To install the certificates on the system I followed these steps:
1. I copied the .crt files (including the CA file) into /usr/share/ca-certificates.
2. I ran sudo dpkg-reconfigure ca-certificates.

It showed a UI to select certificates; I selected all. At the end it showed the number of certificates added.

I put the certificate paths in nginx.conf and copied the .crt and .key files to /etc/ssl/certs/ and /etc/ssl/private/ respectively.

ssl_certificate     "/etc/pki/certs/XServer_certificate.crt";
ssl_certificate_key "/etc/ssl/private/XServer_decryp.key";

Now I tried to log in to the server from the browser client, but certificate validation failed with the following server log message:

info: LoginController[0]
Certification Error: unable to get local issuer certificate

To reconfirm, I copied all the certificate files into /usr/share/ca-certificates (14 files in total) and ran sudo dpkg-reconfigure ca-certificates again. I get the same error.

Next I ran sudo update-ca-certificates --fresh. It showed the number of certificates installed, with a warning for the CA .pem file:

WARNING: CA_50EA.pem does not contain a certificate or CRL: skipping
152 added, 0 removed; done.

A .pem file is created in /etc/ssl/certs, but I supplied only a .crt file in /usr/share/ca-certificates.
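That warning usually means the file is not PEM-encoded: update-ca-certificates only accepts PEM (with a .crt extension under /usr/share/ca-certificates). It is worth checking whether CA_50EA.crt is actually DER and converting it. The sketch below demonstrates the check and the DER-to-PEM conversion on a throwaway self-signed certificate standing in for the real CA file:

```shell
# Generate a throwaway cert to play the role of CA_50EA.crt
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demo-ca" \
  -keyout demo.key -out demo.crt 2>/dev/null

# Make a DER-encoded copy (a DER file is what triggers the
# "does not contain a certificate or CRL" warning)
openssl x509 -in demo.crt -outform der -out demo.der

# PEM files start with "-----BEGIN CERTIFICATE-----"; DER files do not
head -c 27 demo.crt; echo

# Convert DER back to PEM, the format update-ca-certificates requires
openssl x509 -inform der -in demo.der -out demo-pem.crt
head -c 27 demo-pem.crt; echo
```

After converting the real file, re-run sudo update-ca-certificates and the warning should disappear.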

I tried to verify using the command below, and I see the same issue:

openssl s_client -connect [server name]:443 -showcerts -CAfile /etc/ssl/certs/ca-certificates.crt
.................
.................
Verify return code: 21 (unable to verify the first certificate)

It clearly tells me that verification failed.

Is there a step I'm missing? What is the right approach here? From the error, I suspect the CA is not installed properly.

I have read many blogs; most of them explain the same steps, but it is not working for me.

Some time back I worked on CentOS 7, where I used the following commands to install certificates on the system: update-ca-trust force-enable and update-ca-trust extract. On CentOS, client certificate validation works fine with those commands.

Any feedback is appreciated.

Thanks

Postfix error with remote mailserver - Error 550

Posted: 01 Aug 2021 10:02 PM PDT

I am new to mail servers and I think I totally destroyed the config, and I have no clue how to fix this mess. To my understanding things are "OK", but I think I'm missing something important.

I replaced IPs with SERVER1 and SERVER2 and the domain with MYDOMAIN.

Setup:

Two Hetzner servers: one for the website (SERVER1 - VPS), another for mail and other things (SERVER2 - webhosting). The website needs to send SMTP mail with PHP, and that part is working. But if the recipient address is user@MYDOMAIN.com, the mail won't be delivered and I get these errors:

status=bounced (host mail.MYDOMAIN.com[SERVER2] said: 550 Sender verify failed (in reply to MAIL FROM command))
status=bounced (host mail.MYDOMAIN.com[SERVER2] said: 550 Unroutable address (in reply to RCPT TO command))

I have tried many different things and configurations, but none of them works. Here is where it stands right now.

DNS:

$TTL 7200
@   IN SOA ns1.your-server.de. postmaster.your-server.de. (
    2018033101   ; serial
    86400        ; refresh
    10800        ; retry
    3600000      ; expire
    86400 )      ; minimum

@                        IN NS      ns3.second-ns.de.
@                        IN NS      ns.second-ns.com.
@                        IN NS      ns1.your-server.de.

@                        IN A       SERVER1
entwickler               IN A       SERVER2
mail                     IN A       SERVER2
www                      IN A       SERVER1
autoconfig               IN CNAME   mail.your-server.de.
ftp                      IN CNAME   www
imap                     IN CNAME   mail
pop                      IN CNAME   mail
relay                    IN CNAME   mail
smtp                     IN CNAME   mail
@                        IN MX 10   mail
_autodiscover._tcp       IN SRV     0 100 443 mail.your-server.de.
@                        IN TXT     "v=spf1 +a +mx ?all"

Hostname SERVER1:

hostname    -> mail
hostname -f -> mail.MYDOMAIN.com

Postfix config:

smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
biff = no
append_dot_mydomain = no

readme_directory = no

# TLS parameters
smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
smtpd_use_tls=yes
smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache

smtpd_relay_restrictions = permit_mynetworks permit_sasl_authenticated defer_unauth_destination
myhostname = mail.MYDOMAIN.com
alias_maps = hash:/etc/aliases
alias_database = hash:/etc/aliases
myorigin = /etc/mailname
mydestination = MYDOMAIN.com, localhost
relayhost =
mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
mailbox_size_limit = 0
recipient_delimiter = +
inet_interfaces = all
inet_protocols = all

I am very thankful for any advice and help you can give me. I'm trying to learn, but mail servers are a real diva...

Let me know if you need more information.

UPDATE:

After I changed the origin and destination, the error changed to something I already had before:

status=bounced (unknown user: "USER")  

Here is the full log:

Apr  5 09:09:19 mail postfix/pickup[29043]: B30A82645F9: uid=33 from=<www-data>
Apr  5 09:09:19 mail postfix/cleanup[29052]: B30A82645F9: message-id=<20180405070919.B30A82645F9@mail.MYDOMAIN.com>
Apr  5 09:09:19 mail postfix/qmgr[29044]: B30A82645F9: from=<www-data@MYDOMAIN.com>, size=426, nrcpt=1 (queue active)
Apr  5 09:09:19 mail postfix/local[29054]: B30A82645F9: to=<USER@MYDOMAIN.com>, relay=local, delay=0.04, delays=0.02/0/0/0.01, dsn=5.1.1, status=bounced (unknown user: "USER")
Apr  5 09:09:19 mail postfix/cleanup[29052]: BAA252645FC: message-id=<20180405070919.BAA252645FC@mail.MYDOMAIN.com>
Apr  5 09:09:19 mail postfix/bounce[29055]: B30A82645F9: sender non-delivery notification: BAA252645FC
Apr  5 09:09:19 mail postfix/qmgr[29044]: BAA252645FC: from=<>, size=2296, nrcpt=1 (queue active)
Apr  5 09:09:19 mail postfix/qmgr[29044]: B30A82645F9: removed
Apr  5 09:09:19 mail postfix/local[29054]: BAA252645FC: to=<www-data@MYDOMAIN.com>, relay=local, delay=0.02, delays=0.01/0/0/0.01, dsn=2.0.0, status=sent (delivered to mailbox)
Apr  5 09:09:19 mail postfix/qmgr[29044]: BAA252645FC: removed

Mail to Gmail and the like still works without errors. I guess Postfix is still trying to deliver the mail locally instead of passing it over to the mail server?

UPDATE 2:

I have now changed mydestination to localhost, since that seems to be right? Now the relay is the mail server, but two new errors show up:

Apr  5 10:18:45 mail postfix/pickup[29400]: 4BB7F264615: uid=33 from=<www-data>
Apr  5 10:18:45 mail postfix/cleanup[29425]: 4BB7F264615: message-id=<20180405081845.4BB7F264615@mail.MYDOMAIN.com>
Apr  5 10:18:45 mail postfix/qmgr[29401]: 4BB7F264615: from=<www-data@MYDOMAIN.com>, size=426, nrcpt=1 (queue active)
Apr  5 10:18:48 mail postfix/smtp[29426]: 4BB7F264615: to=<USER@MYDOMAIN.com>, relay=mail.MYDOMAIN.com[SERVER2]:25, delay=3.1, delays=0.02/0/3.1/0, dsn=5.0.0, status=bounced (host mail.MYDOMAIN.com[SERVER2] said: 550-Verification failed for <www-data@MYDOMAIN.com> 550-Unrouteable address 550 Sender verify failed (in reply to MAIL FROM command))
Apr  5 10:18:48 mail postfix/cleanup[29425]: 73183264624: message-id=<20180405081848.73183264624@mail.MYDOMAIN.com>
Apr  5 10:18:48 mail postfix/bounce[29438]: 4BB7F264615: sender non-delivery notification: 73183264624
Apr  5 10:18:48 mail postfix/qmgr[29401]: 73183264624: from=<>, size=2611, nrcpt=1 (queue active)
Apr  5 10:18:48 mail postfix/qmgr[29401]: 4BB7F264615: removed
Apr  5 10:18:51 mail postfix/smtp[29426]: 73183264624: to=<www-data@MYDOMAIN.com>, relay=mail.MYDOMAIN.com[SERVER2]:25, delay=3, delays=0.01/0/3/0.02, dsn=5.0.0, status=bounced (host mail.MYDOMAIN.com[SERVER2] said: 550 Unroutable address (in reply to RCPT TO command))
Apr  5 10:18:51 mail postfix/qmgr[29401]: 73183264624: removed

Powershell take ownership of AD Objects

Posted: 01 Aug 2021 09:07 PM PDT

I have a list of users, computers, and groups in AD that random people are the owners of. I'd like to clean them up for security reasons and make Domain Admins the owner of all these objects. Can someone assist with a PowerShell script for this?

I searched Google without any luck. I found this old code, but it doesn't seem to work; I keep getting an error for the owner. I'm running it as a domain admin on a Windows 10 machine.

Param (
  [parameter(Position=0,Mandatory=$true,ValueFromPipeline=$true)][string]$Identity,
  [parameter(Position=1,Mandatory=$true,ValueFromPipeline=$true)][string]$Owner
)

try {
  $oADObject = Get-ADObject -Filter { (Name -eq $Identity) -or (DistinguishedName -eq $Identity) };
  $oAceObj   = Get-Acl -Path ("ActiveDirectory:://RootDSE/" + $oADObject.DistinguishedName);
} catch {
  Write-Error "Failed to find the source object.";
  return;
}

try {
  $oADOwner   = Get-ADObject -Filter { (Name -eq $Owner) -or (DistinguishedName -eq $Owner) };
  $oNewOwnAce = New-Object System.Security.Principal.NTAccount($oADOwner.Name);
} catch {
  Write-Error "Failed to find the new owner object.";
  return;
}

try {
  $oAceObj.SetOwner($oNewOwnAce);
  Set-Acl -Path ("ActiveDirectory:://RootDSE/" + $oADObject.DistinguishedName) -AclObject $oAceObj;
} catch {
  $errMsg = "Failed to set the new new ACE on " + $oADObject.Name;
  Write-Error $errMsg;
}

e.g. Running .\set-adowner.ps1 -Identity "RANDOMUSER" -Owner "domain admins"

I would also like it to run through a text file containing all the objects' sAMAccountNames, once I get the base script working.
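A hedged sketch of that follow-up step, assuming a hypothetical objects.txt with one sAMAccountName per line and that the base script above works as intended:

```powershell
# Hypothetical wrapper -- objects.txt contains one sAMAccountName per line
Get-Content .\objects.txt | ForEach-Object {
    .\set-adowner.ps1 -Identity $_ -Owner "domain admins"
}
```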

Thank you for any help, Fred

Segfault error 4 in libmysqlclient.so

Posted: 01 Aug 2021 04:05 PM PDT

For a few days now, I have been seeing this kind of message in my syslog:

Sep 23 14:28:42 server kernel: [138926.637593] php5-fpm[6455]: segfault at 7f9ade735018 ip 00007f9ae4026772 sp 00007ffd69b4fad0 error 4 in libmysqlclient.so.18.0.0[7f9ae3ff9000+2f1000]
Sep 23 14:28:44 server kernel: [138928.314016] php5-fpm[22742]: segfault at 7f9ade3db018 ip 00007f9ae4026772 sp 00007ffd69b4fad0 error 4 in libmysqlclient.so.18.0.0[7f9ae3ff9000+2f1000]
Sep 23 14:32:11 server kernel: [139135.318287] php5-fpm[16887]: segfault at 7f9ade4b3018 ip 00007f9ae4026772 sp 00007ffd69b4fad0 error 4 in libmysqlclient.so.18.0.0[7f9ae3ff9000+2f1000]
Sep 23 14:32:49 server kernel: [139173.050377] php5-fpm[668]: segfault at 7f9ade61a018 ip 00007f9ae4026772 sp 00007ffd69b4fad0 error 4 in libmysqlclient.so.18.0.0[7f9ae3ff9000+2f1000]
Sep 23 14:33:19 server kernel: [139203.396935] php5-fpm[26277]: segfault at 7f9ade6c0018 ip 00007f9ae4026772 sp 00007ffd69b4fad0 error 4 in libmysqlclient.so.18.0.0[7f9ae3ff9000+2f1000]
Sep 23 14:35:06 server kernel: [139310.048740] php5-fpm[27017]: segfault at 7f9ade46c018 ip 00007f9ae4026772 sp 00007ffd69b4fad0 error 4 in libmysqlclient.so.18.0.0[7f9ae3ff9000+2f1000]
Sep 23 14:35:19 server kernel: [139323.494188] php5-fpm[31263]: segfault at 7f9ade5e2018 ip 00007f9ae4026772 sp 00007ffd69b4fad0 error 4 in libmysqlclient.so.18.0.0[7f9ae3ff9000+2f1000]
Sep 23 14:36:10 server kernel: [139374.904308] php5-fpm[26422]: segfault at 7f9ade6cf018 ip 00007f9ae4026772 sp 00007ffd69b4fad0 error 4 in libmysqlclient.so.18.0.0[7f9ae3ff9000+2f1000]
Sep 23 14:37:25 server kernel: [139449.360384] php5-fpm[20806]: segfault at 7f9ade644018 ip 00007f9ae4026772 sp 00007ffd69b4fad0 error 4 in libmysqlclient.so.18.0.0[7f9ae3ff9000+2f1000]
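One thing worth noticing: every fault hits the same instruction pointer, i.e. the same instruction inside libmysqlclient. The offset of that instruction within the library can be computed from any one log line (a small worked check; the hex values are copied from the log above):

```python
# "ip" is the faulting instruction pointer from the segfault line; the
# bracketed value in the log is the library mapping's base address + size.
ip = 0x7f9ae4026772        # ip field from the log
lib_base = 0x7f9ae3ff9000  # start of the libmysqlclient.so.18.0.0 mapping
offset = ip - lib_base
print(hex(offset))         # offset of the crashing instruction in the .so
```

That offset could then be fed to addr2line or objdump -d against the matching libmysqlclient build to see which function keeps crashing.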

I'm using Debian 8 and MariaDB. In the beginning it happened only once every 2 or 3 hours, but now it's several times per hour. After some research, I understand it may be a memory problem, but I haven't found a solution.

This is what I see in mysqltuner:

-------- Storage Engine Statistics -------------------------------------------
[--] Status: +ARCHIVE +Aria +BLACKHOLE +CSV +FEDERATED +InnoDB +MRG_MyISAM
[--] Data in InnoDB tables: 2G (Tables: 79)
[--] Data in MyISAM tables: 96M (Tables: 146)
[--] Data in PERFORMANCE_SCHEMA tables: 0B (Tables: 52)
[!!] Total fragmented tables: 34

-------- Security Recommendations  -------------------------------------------
[OK] All database users have passwords assigned

-------- Performance Metrics -------------------------------------------------
[--] Up for: 1d 16h 44m 38s (73M q [502.853 qps], 196K conn, TX: 572B, RX: 14B)
[--] Reads / Writes: 97% / 3%
[--] Total buffers: 17.3G global + 56.2M per thread (500 max threads)
[!!] Maximum possible memory usage: 44.8G (142% of installed RAM)
[OK] Slow queries: 0% (2K/73M)
[OK] Highest usage of available connections: 28% (141/500)
[OK] Key buffer size / total MyISAM indexes: 1.0G/32.6M
[OK] Key buffer hit rate: 100.0% (132M cached / 53K reads)
[OK] Query cache efficiency: 44.9% (50M cached / 113M selects)
[!!] Query cache prunes per day: 260596
[OK] Sorts requiring temporary tables: 0% (2K temp sorts / 2M sorts)
[OK] Temporary tables created on disk: 21% (6K on disk / 28K total)
[OK] Thread cache hit rate: 99% (141 created / 196K connections)
[OK] Table cache hit rate: 72% (500 open / 692 opened)
[OK] Open file limit used: 17% (429/2K)
[OK] Table locks acquired immediately: 99% (25M immediate / 25M locks)
[OK] InnoDB buffer pool / data size: 16.0G/2.4G
[!!] InnoDB log waits: 30

So the maximum possible memory usage is too high, but I adjusted my InnoDB buffer pool size to 16 GB; with 32 GB of RAM that should be OK. I don't know what to do to optimize this.
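For what it's worth, the 44.8G warning is just arithmetic over the configured limits, which is why lowering max_connections or the per-thread buffers shrinks it. A rough reproduction using the figures from the report above (the 56.2M per-thread value is mysqltuner's own estimate):

```python
# Reproduce mysqltuner's "maximum possible memory usage": global buffers
# plus per-thread buffers times the connection limit.
global_buffers_gb = 17.3   # "Total buffers: 17.3G global"
per_thread_mb = 56.2       # "+ 56.2M per thread"
max_connections = 500      # "(500 max threads)"

max_possible_gb = global_buffers_gb + per_thread_mb * max_connections / 1024
print(round(max_possible_gb, 1))
```

This lands close to the reported 44.8G (the small gap is rounding in the report), i.e. well over the 32 GB installed.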

The thing is, the general memory usage on the server is always under 89% (plus caching). MySQL is actually using 50.6% of RAM. I don't know if there is a link between all of this, but I prefer to mention it here. Otherwise, everything seems to be OK on the MySQL side...

Finally, these are the main variables in my.cnf that I adjusted and that could have an effect on this:

max_connections = 100
max_heap_table_size = 64M
read_buffer_size = 4M
read_rnd_buffer_size = 32M
sort_buffer_size = 8M
query_cache_size = 256M
query_cache_limit = 4M
query_cache_type = 1
query_cache_strip_comments =1
thread_stack = 192K
transaction_isolation = READ-COMMITTED
tmp_table_size = 64M
nnodb_additional_mem_pool_size = 16M
innodb_buffer_pool_size = 16G
thread_cache_size   = 4M
max_connections     = 500
join_buffer_size    = 12M
interactive_timeout = 30
wait_timeout        = 30
open_files_limit    = 800
innodb_file_per_table
key_buffer_size     = 1G
table_open_cache    = 500
innodb_log_file_size    = 256M

Two days ago the server crashed, with no reason in syslog except the segfault. Can a segfault crash the system? Any idea what causes the segfault, or a few ways to trace the origin of the problem?

MariaDB optimal configuration on CentOS

Posted: 01 Aug 2021 06:04 PM PDT

I have a cloud server with the following config:

CPU: 8 vCore (4 Core, 4 Processor)
RAM: 16 GB
SSD: 240 GB

This server features CentOS7-64bit with MariaDB running as the database. (Date bug is fixed)

The server is very database intensive and updated near constantly.

I am seeing > 95% utilization on the server, but I believe the default configuration is more to blame than running out of resources.

I'd like to offload more of the work to RAM, which is only 11% used by MySQL/MariaDB.

41724 mysql     20   0 10.163g 1.765g   9704 S 298.0 11.3  19:10.43 mysqld  

Using this Server Fault question, I've adjusted my database config file as best I could.

This has improved things somewhat, but is there anything else I can do to improve performance, short of upgrading the CPU?

[mysqld]
bind-address = ::
skip_name_resolve
local-infile=0
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
key_buffer_size=4G
max_allowed_packet=128M
query_cache_type = 1
query_cache_limit = 100M
query_cache_min_res_unit = 2k
query_cache_size = 784M
tmp_table_size= 2048M
max_heap_table_size= 2048M
skip-name-resolve
innodb_buffer_pool_size= 7G
innodb_file_per_table=1
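As a quick sanity check on headroom, a rough tally of just the fixed global buffers declared in this config (a sketch; numbers copied from the values above):

```python
# Fixed global buffers only, in GiB; per-connection buffers and in-memory
# temporary tables (up to 2 GiB each here) come on top of this.
key_buffer = 4.0            # key_buffer_size = 4G
query_cache = 784 / 1024    # query_cache_size = 784M
innodb_pool = 7.0           # innodb_buffer_pool_size = 7G

total_gb = key_buffer + query_cache + innodb_pool
print(round(total_gb, 2))
```

On a 16 GB box that already claims close to 12 GB before any connection allocates anything, which is worth keeping in mind when pushing more work into RAM.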

Linux foo-over-udp tunnel creation issues

Posted: 01 Aug 2021 06:18 PM PDT

I want to create a fou (foo-over-UDP) tunnel on Linux 4.4.10 using iproute2 4.5.0, and while trying to create the tunnel I get the following error:

sudo ip link add dev tun0 type ipip remote 172.19.0.9 local 172.19.0.8 encap fou encap-sport auto encap-dport 4444
RTNETLINK answers: Invalid argument

While this usually indicates that I provided wrong arguments, the device still gets created, although with a weird name and unconfigured:

10: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN mode DEFAULT group default qlen 1
    link/ipip 0.0.0.0 brd 0.0.0.0

I should also mention that the fou and ipip kernel modules are loaded:

fou                     9093  0
ip6_udp_tunnel          1967  1 fou
udp_tunnel              2547  1 fou
ipip                    5260  0
ip_tunnel              13315  1 ipip
tunnel4                 2389  1 ipip

I didn't try any other kernel version, but I did try the latest version of iproute2. I should also mention that I was trying this setup in a Linux network namespace, but I get the same problem outside of the namespace. What could be causing this issue, and is there any other way to configure a fou tunnel? From my observations, iproute2 is the problematic one.
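For reference, a complete fou+ipip setup usually also opens a fou receive port with `ip fou add`. A hedged sketch of what a typical working configuration looks like, using the addresses from the question (this is the documented setup pattern, not a confirmed fix for the EINVAL above):

```shell
sudo modprobe fou ipip
# Open a fou receive port for IPIP (IP protocol 4) on each peer:
sudo ip fou add port 4444 ipproto 4
# Then create the encapsulated tunnel device:
sudo ip link add dev tun0 type ipip \
    remote 172.19.0.9 local 172.19.0.8 \
    encap fou encap-sport auto encap-dport 4444
sudo ip link set tun0 up
```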

How to setup health check on multiple ports in new AWS Application LB?

Posted: 01 Aug 2021 05:04 PM PDT

Can someone help me set up multi-port health checks on a single AWS ALB?

My instances are running Dockerized containers on different ports. The ALB scheme is internal.

Please help.
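ALB health checks are defined per target group, so "multiple ports" in practice means one target group per container port, each with its own check. A hedged AWS CLI sketch (every name, ID, port, and path here is a placeholder):

```shell
# One target group per container port; repeat for each port, then attach
# listener rules forwarding to each target group.
aws elbv2 create-target-group \
    --name app-port-8080 --protocol HTTP --port 8080 \
    --vpc-id vpc-0123456789abcdef0 \
    --health-check-protocol HTTP --health-check-path /health \
    --health-check-port traffic-port
```

Using `traffic-port` makes the health check hit the same port the target group forwards traffic to.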

Locked LVM logical volume in VG with missing PV

Posted: 01 Aug 2021 08:06 PM PDT

So, I had a disk failure and was moving LVs from the failing disk to new PVs. Some LVs were moved successfully, some were not. Afterwards I ended up with the following state: two locked LVs, and a volume group with a missing PV.

When I try to remove PV, I get:

vgreduce --removemissing --force vg3
  Couldn't find device with uuid RQr0HS-17ts-1k6Y-Xnex-IZwi-Y2kM-vCc5mP.
  Removing partial LV var.
  Can't remove locked LV var
lvremove -fff vg3/var
  Couldn't find device with uuid RQr0HS-17ts-1k6Y-Xnex-IZwi-Y2kM-vCc5mP.
  Can't remove locked LV var
pvmove --abort
  Couldn't find device with uuid RQr0HS-17ts-1k6Y-Xnex-IZwi-Y2kM-vCc5mP.
  Cannot change VG vg3 while PVs are missing.
  Consider vgreduce --removemissing.
  Skipping volume group vg3

I also tried executing vgcfgbackup and then vgcfgrestore after editing the locks out, but to no avail:

vgcfgrestore --force vg3
  Couldn't find device with uuid RQr0HS-17ts-1k6Y-Xnex-IZwi-Y2kM-vCc5mP.
  Cannot restore Volume Group vg3 with 1 PVs marked as missing.
  Restore failed.

So I went even further and inserted the disk back. It has failed, but it is detectable for a bit.

vgreduce --removemissing vg3
  /dev/vg3/var: read failed after 0 of 4096 at 9638445056: Input/output error
  /dev/vg3/var: read failed after 0 of 4096 at 9638502400: Input/output error
  WARNING: Partial LV var needs to be repaired or removed.
  WARNING: Partial LV pvmove1 needs to be repaired or removed.
  There are still partial LVs in VG vg3.
  To remove them unconditionally use: vgreduce --removemissing --force.
  Proceeding to remove empty missing PVs.
lvremove -fff vg3/var
  /dev/vg3/var: read failed after 0 of 4096 at 9638445056: Input/output error
  /dev/vg3/var: read failed after 0 of 4096 at 9638502400: Input/output error
  Can't remove locked LV var
pvmove --abort
  /dev/vg3/var: read failed after 0 of 4096 at 9638445056: Input/output error
  /dev/vg3/var: read failed after 0 of 4096 at 9638502400: Input/output error
  Cannot change VG vg3 while PVs are missing.
  Consider vgreduce --removemissing.
  Skipping volume group vg3

And this is the point at which I am out of ideas.

cannot login to mysql root from vagrant, but can from sudo

Posted: 01 Aug 2021 07:03 PM PDT

I just reinstalled MySQL, which is now running 5.7.9. I'm using Vagrant.

I try to login to mysql with mysql -u root -proot and I get: ERROR 1698 (28000): Access denied for user 'root'@'localhost'

I then ran sudo -i and mysql -u root -proot again, and it allowed me to log in with full root access.

Any idea why this is and how to fix it? Is the root user secured in some way on newer versions of MySQL that makes it inaccessible unless you're a sudo/root user in Ubuntu?
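One hedged explanation: on Debian/Ubuntu packages of MySQL 5.7, root defaults to the auth_socket plugin, which authenticates via the Unix socket and therefore only lets the OS root user in; that would explain why sudo works while the vagrant user gets error 1698. A sketch for inspecting and, optionally, switching root back to password authentication:

```shell
# Check which auth plugin root is using (run via sudo, which can connect):
sudo mysql -e "SELECT user, host, plugin FROM mysql.user WHERE user = 'root';"
# If it says auth_socket, switch to password auth (password is the one
# from the question; treat it as a placeholder):
sudo mysql -e "ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY 'root';"
sudo mysql -e "FLUSH PRIVILEGES;"
```

An alternative that avoids weakening root is to create a separate application user with the privileges the vagrant workflow needs.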

Windows Task Scheduler don't show execution errors

Posted: 01 Aug 2021 08:01 PM PDT

I'm trying to make a PowerShell script that will run from time to time through the Windows Task Scheduler.

My problem is that the Task Scheduler does not display the task as failed, even though my script does fail.

This is the event that I get:

Level   Date and Time   Event ID    Task Category   Operational Code    Correlation Id
Information 15.03.2016 22:53:06 102 Task completed  (2) f03232d8-4196-4425-88a9-722028f9700a    "Task Scheduler successfully finished ""{F03232D8-4196-4425-88A9-722028F9700A}"" instance of the ""\Microsoft\Windows\PowerShell\ScheduledJobs\My Scheduled Task"" task for user ""VEGAR-M4800\vegar""."
Information 15.03.2016 22:53:06 201 Action completed    (2) f03232d8-4196-4425-88a9-722028f9700a    "Task Scheduler successfully completed task ""\Microsoft\Windows\PowerShell\ScheduledJobs\My Scheduled Task"" , instance ""{F03232D8-4196-4425-88A9-722028F9700A}"" , action ""StartPowerShellJob"" with return code 0."
Information 15.03.2016 22:53:03 200 Action started  (1) f03232d8-4196-4425-88a9-722028f9700a    "Task Scheduler launched action ""StartPowerShellJob"" in instance ""{F03232D8-4196-4425-88A9-722028F9700A}"" of task ""\Microsoft\Windows\PowerShell\ScheduledJobs\My Scheduled Task""."
Information 15.03.2016 22:53:03 100 Task Started    (1) f03232d8-4196-4425-88a9-722028f9700a    "Task Scheduler started ""{F03232D8-4196-4425-88A9-722028F9700A}"" instance of the ""\Microsoft\Windows\PowerShell\ScheduledJobs\My Scheduled Task"" task for user ""VEGAR-M4800\vegar""."
Information 15.03.2016 22:53:03 129 Created Task Process    Info    00000000-0000-0000-0000-000000000000    "Task Scheduler launch task ""\Microsoft\Windows\PowerShell\ScheduledJobs\My Scheduled Task"" , instance ""powershell.exe""  with process ID 12780."
Information 15.03.2016 22:53:03 107 Task triggered on scheduler Info    f03232d8-4196-4425-88a9-722028f9700a    "Task Scheduler launched ""{F03232D8-4196-4425-88A9-722028F9700A}""  instance of task ""\Microsoft\Windows\PowerShell\ScheduledJobs\My Scheduled Task"" due to a time trigger condition."

The Results.xml file tells another story:

    <Results_Error z:Assembly="0" z:Id="61" z:Type="System.Collections.ObjectModel.Collection`1[[System.Management.Automation.ErrorRecord, System.Management.Automation, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35]]">    <items xmlns="http://schemas.datacontract.org/2004/07/System.Management.Automation" z:Assembly="0" z:Id="62" z:Type="System.Collections.Generic.List`1[[System.Management.Automation.ErrorRecord, System.Management.Automation, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35]]">      <_items z:Id="63" z:Size="4">        <ErrorRecord z:Assembly="System.Management.Automation, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" z:Id="64" z:Type="System.Management.Automation.Runspaces.RemotingErrorRecord">          <CliXml xmlns="" z:Assembly="0" z:Id="65" z:Type="System.String">&lt;Objs Version=&quot;1.1.0.1&quot; xmlns=&quot;http://schemas.microsoft.com/powershell/2004/04&quot;&gt;    &lt;Obj RefId=&quot;0&quot;&gt;    &lt;ToString&gt;@{Exception=System.Management.Automation.RemoteException: Could not find a part of the path 'C:\source\flis\'.; TargetObject=; FullyQualifiedErrorId=FileOpenFailure,Microsoft.PowerShell.Commands.OutFileCommand; InvocationInfo=; ErrorCategory_Category=1; ErrorCategory_Activity=Out-File; ErrorCategory_Reason=DirectoryNotFoundException; ErrorCategory_TargetName=; ErrorCategory_TargetType=; ErrorCategory_Message=OpenError: (:) [Out-File], DirectoryNotFoundException; SerializeExtendedInfo=False; ErrorDetails_ScriptStackTrace=at &amp;lt;ScriptBlock&amp;gt;, &amp;lt;No file&amp;gt;: line 1_x000D__x000A_at Do-Backup, &amp;lt;No file&amp;gt;: line 66_x000D__x000A_at &amp;lt;ScriptBlock&amp;gt;, &amp;lt;No file&amp;gt;: line 83}&lt;/ToString&gt;  ....  
</CliXml>          <RemoteErrorRecord_OriginInfo xmlns="" z:Assembly="System.Management.Automation, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" z:Id="66" z:Type="System.Management.Automation.Remoting.OriginInfo">            <_computerName xmlns="http://schemas.datacontract.org/2004/07/System.Management.Automation.Remoting" z:Id="67">localhost</_computerName>            <_instanceId xmlns="http://schemas.datacontract.org/2004/07/System.Management.Automation.Remoting">00000000-0000-0000-0000-000000000000</_instanceId>            <_runspaceID xmlns="http://schemas.datacontract.org/2004/07/System.Management.Automation.Remoting">975a8cbe-dbd1-43c0-80b4-19c282eee381</_runspaceID>          </RemoteErrorRecord_OriginInfo>        </ErrorRecord>        <ErrorRecord i:nil="true"/>        <ErrorRecord i:nil="true"/>        <ErrorRecord i:nil="true"/>      </_items>      <_size>1</_size>      <_version>1</_version>    </items>  </Results_Error>  

I have created the task itself by invoking the following inside a script:

function Create-Task
{
    Param($name, $description)

    $trigger = New-JobTrigger -Daily -At 1am
    $options = New-ScheduledJobOption -WakeToRun -RequireNetwork

    Register-ScheduledJob -Name $name -FilePath $MyInvocation.PSCommandPath -ArgumentList "Run", $MyInvocation.PSScriptRoot -Trigger $trigger -ScheduledJobOption $options
}

Any idea why I don't get a nice red error?

SSL: couldn't read X509 certificate from pem file lighttpd

Posted: 01 Aug 2021 04:05 PM PDT

I have installed an SSL key in the past on my lighttpd, and I documented the steps so I could replicate them in the future. Well, I haven't touched the server for a while, and now I need to create a new web server with lighttpd that supports SSL. I followed all the steps in my notes and got this error:

SSL: couldn't read X509 certificate from PEM file  

I'm not sure what I missed; if someone could take a look at my steps and suggest what went wrong, I'd really appreciate it.

This is my environment

CentOS 6.4 64 bit
lighttpd/1.4.35 (ssl) - a light and fast webserver

My SSL certificate is from StartCom.

Here are my steps

Generated my CSR:

openssl req -new -newkey rsa:4096 -nodes -out myserver.csr -keyout myserver_privatekey.key -subj "/C=us/ST=State/L=City/O=MyCompany/OU=Productions/CN=myserver.mycompany.net"  

Sent the CSR to StartCom, received the certificate, and saved it as

myserver.crt  

Created the final PEM file:

cat myserver_privatekey.key myserver.crt > myserver.pem  

Got these two files from StartCom:

ca.pem
sub.class1.server.ca.pem

Combined those two files:

cat ca.pem sub.class1.server.ca.pem >> ca-certs.crt  

Moved the .crt and .pem files to the myssl directory.

Here is my lighttpd configuration:

$SERVER["socket"] == "0.0.0.0:443" {
        ssl.engine  = "enable"
        ssl.ca-file = "/etc/lighttpd/myssl/ca-certs.crt"
        ssl.pemfile = "/etc/lighttpd/myssl/myserver.pem"
}

$SERVER["socket"] == "[::]:443" {
        ssl.engine  = "enable"
        ssl.ca-file = "/etc/lighttpd/myssl/ca-certs.crt"
        ssl.pemfile = "/etc/lighttpd/myssl/myserver.pem"
}

$HTTP["host"] =~ "^(myserver\.)?mycompany\.net$" {
        ssl.ca-file = "/etc/lighttpd/myssl/ca-certs.crt"
        ssl.pemfile = "/etc/lighttpd/myssl/myserver.pem"
        server.document-root = "/var/www/lighttpd/mywebsite"
}

When I was done, I restarted lighttpd, and this is the error I got:

Starting lighttpd: 2015-09-20 15:58:32: (network.c.543) SSL: couldn't read X509 certificate from '/etc/lighttpd/myssl/myserver.pem'  

I haven't seen that error before, so I'm not quite sure how to move forward. Can anyone give me their two cents on what I missed?
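One way to narrow this down is to verify that the combined PEM file actually parses (a diagnostic sketch using the file names from the steps above; both openssl commands should print without errors). A common cause of this exact lighttpd message is a private key file without a trailing newline, so that `cat` glues the key's END line onto the certificate's BEGIN line:

```shell
openssl rsa  -in myserver.pem -noout -check    # does the private key parse?
openssl x509 -in myserver.pem -noout -subject  # does the certificate parse?
grep -c 'BEGIN CERTIFICATE' myserver.pem       # expect at least 1
```

If the key check passes but the certificate check fails, inspect the boundary between the two PEM blocks inside myserver.pem.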

Exchange 2013 with two wildcard certificates for services

Posted: 01 Aug 2021 05:04 PM PDT

We've an Exchange Server with two different DNS names pointing to the same server, the internal and the external name. Something like this:

exchange.domain.com [External Domain]
exchange1.local.domain.com [Internal Domain]

So there are two wildcard certificates for those domains: *.local.domain.com and *.domain.com.

The problem happens when users reach the Exchange server through the internal domain name. Since I was only able to bind the *.domain.com certificate in IIS, clients connecting with the internal DNS name don't get a matching certificate.

The question is basically: how do I match each DNS domain name with the corresponding certificate? Reissuing the certificates with additional SANs isn't an option.

Thanks in advance.

Configure AJP Connector for enabling SSL in Eclipse Tomcat

Posted: 01 Aug 2021 08:01 PM PDT

I am a newbie to server-side work and would like to know how to configure the AJP connector in Eclipse's Tomcat on Linux to enable SSL.

I want to enable SSL in Tomcat without using a keystore. I created key.pem, cert.pem, and cert.crt, and don't know what to do next. Can anybody help? I used the following commands:

openssl req -x509 -newkey rsa:2048 -keyout key.pem -out cert.pem -days 999
openssl x509 -outform der -in cert.pem -out cert.crt

Server.xml

<Connector
        port="8443" maxThreads="200"
        scheme="https" secure="true" SSLEnabled="true"
        SSLCertificateFile="/root/keyTest/cert.pem"
        SSLCertificateKeyFile="/root/keyTest/key.pem"
        clientAuth="optional" SSLProtocol="TLSv1"/>

But I got an error:

SEVERE: Failed to load keystore type JKS with path /root/.keystore due to /root/.keystore (No such file or directory)      java.io.FileNotFoundException: /root/.keystore (No such file or directory)          at java.io.FileInputStream.open(Native Method)          at java.io.FileInputStream.<init>(FileInputStream.java:146)          at org.apache.tomcat.util.net.jsse.JSSESocketFactory.getStore(JSSESocketFactory.java:413)          at org.apache.tomcat.util.net.jsse.JSSESocketFactory.getKeystore(JSSESocketFactory.java:319)          at org.apache.tomcat.util.net.jsse.JSSESocketFactory.getKeyManagers(JSSESocketFactory.java:577)          at org.apache.tomcat.util.net.jsse.JSSESocketFactory.getKeyManagers(JSSESocketFactory.java:517)          at org.apache.tomcat.util.net.jsse.JSSESocketFactory.init(JSSESocketFactory.java:462)          at org.apache.tomcat.util.net.jsse.JSSESocketFactory.createSocket(JSSESocketFactory.java:209)          at org.apache.tomcat.util.net.JIoEndpoint.bind(JIoEndpoint.java:398)          at org.apache.tomcat.util.net.AbstractEndpoint.init(AbstractEndpoint.java:646)          at org.apache.coyote.AbstractProtocol.init(AbstractProtocol.java:434)          at org.apache.coyote.http11.AbstractHttp11JsseProtocol.init(AbstractHttp11JsseProtocol.java:119)          at org.apache.catalina.connector.Connector.initInternal(Connector.java:978)          at org.apache.catalina.util.LifecycleBase.init(LifecycleBase.java:102)          at org.apache.catalina.core.StandardService.initInternal(StandardService.java:559)          at org.apache.catalina.util.LifecycleBase.init(LifecycleBase.java:102)          at org.apache.catalina.core.StandardServer.initInternal(StandardServer.java:821)          at org.apache.catalina.util.LifecycleBase.init(LifecycleBase.java:102)          at org.apache.catalina.startup.Catalina.load(Catalina.java:638)          at org.apache.catalina.startup.Catalina.load(Catalina.java:663)          at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method)          at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)          at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)          at java.lang.reflect.Method.invoke(Method.java:606)          at org.apache.catalina.startup.Bootstrap.load(Bootstrap.java:280)          at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:454)        May 12, 2015 3:58:56 PM org.apache.coyote.AbstractProtocol init      SEVERE: Failed to initialize end point associated with ProtocolHandler ["http-bio-8443"]      java.io.FileNotFoundException: /root/.keystore (No such file or directory)          at java.io.FileInputStream.open(Native Method)          at java.io.FileInputStream.<init>(FileInputStream.java:146)          at org.apache.tomcat.util.net.jsse.JSSESocketFactory.getStore(JSSESocketFactory.java:413)          at org.apache.tomcat.util.net.jsse.JSSESocketFactory.getKeystore(JSSESocketFactory.java:319)          at org.apache.tomcat.util.net.jsse.JSSESocketFactory.getKeyManagers(JSSESocketFactory.java:577)          at org.apache.tomcat.util.net.jsse.JSSESocketFactory.getKeyManagers(JSSESocketFactory.java:517)          at org.apache.tomcat.util.net.jsse.JSSESocketFactory.init(JSSESocketFactory.java:462)          at org.apache.tomcat.util.net.jsse.JSSESocketFactory.createSocket(JSSESocketFactory.java:209)          at org.apache.tomcat.util.net.JIoEndpoint.bind(JIoEndpoint.java:398)          at org.apache.tomcat.util.net.AbstractEndpoint.init(AbstractEndpoint.java:646)          at org.apache.coyote.AbstractProtocol.init(AbstractProtocol.java:434)          at org.apache.coyote.http11.AbstractHttp11JsseProtocol.init(AbstractHttp11JsseProtocol.java:119)          at org.apache.catalina.connector.Connector.initInternal(Connector.java:978)          at org.apache.catalina.util.LifecycleBase.init(LifecycleBase.java:102)          at org.apache.catalina.core.StandardService.initInternal(StandardService.java:559)       
   at org.apache.catalina.util.LifecycleBase.init(LifecycleBase.java:102)          at org.apache.catalina.core.StandardServer.initInternal(StandardServer.java:821)          at org.apache.catalina.util.LifecycleBase.init(LifecycleBase.java:102)          at org.apache.catalina.startup.Catalina.load(Catalina.java:638)          at org.apache.catalina.startup.Catalina.load(Catalina.java:663)          at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)          at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)          at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)          at java.lang.reflect.Method.invoke(Method.java:606)          at org.apache.catalina.startup.Bootstrap.load(Bootstrap.java:280)          at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:454)        May 12, 2015 3:58:56 PM org.apache.catalina.core.StandardService initInternal      SEVERE: Failed to initialize connector [Connector[HTTP/1.1-8443]]      org.apache.catalina.LifecycleException: Failed to initialize component [Connector[HTTP/1.1-8443]]          at org.apache.catalina.util.LifecycleBase.init(LifecycleBase.java:106)          at org.apache.catalina.core.StandardService.initInternal(StandardService.java:559)          at org.apache.catalina.util.LifecycleBase.init(LifecycleBase.java:102)          at org.apache.catalina.core.StandardServer.initInternal(StandardServer.java:821)          at org.apache.catalina.util.LifecycleBase.init(LifecycleBase.java:102)          at org.apache.catalina.startup.Catalina.load(Catalina.java:638)          at org.apache.catalina.startup.Catalina.load(Catalina.java:663)          at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)          at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)          at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)          at 
java.lang.reflect.Method.invoke(Method.java:606)          at org.apache.catalina.startup.Bootstrap.load(Bootstrap.java:280)          at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:454)      Caused by: org.apache.catalina.LifecycleException: Protocol handler initialization failed          at org.apache.catalina.connector.Connector.initInternal(Connector.java:980)          at org.apache.catalina.util.LifecycleBase.init(LifecycleBase.java:102)          ... 12 more      Caused by: java.io.FileNotFoundException: /root/.keystore (No such file or directory)          at java.io.FileInputStream.open(Native Method)          at java.io.FileInputStream.<init>(FileInputStream.java:146)          at org.apache.tomcat.util.net.jsse.JSSESocketFactory.getStore(JSSESocketFactory.java:413)          at org.apache.tomcat.util.net.jsse.JSSESocketFactory.getKeystore(JSSESocketFactory.java:319)          at org.apache.tomcat.util.net.jsse.JSSESocketFactory.getKeyManagers(JSSESocketFactory.java:577)          at org.apache.tomcat.util.net.jsse.JSSESocketFactory.getKeyManagers(JSSESocketFactory.java:517)          at org.apache.tomcat.util.net.jsse.JSSESocketFactory.init(JSSESocketFactory.java:462)          at org.apache.tomcat.util.net.jsse.JSSESocketFactory.createSocket(JSSESocketFactory.java:209)          at org.apache.tomcat.util.net.JIoEndpoint.bind(JIoEndpoint.java:398)          at org.apache.tomcat.util.net.AbstractEndpoint.init(AbstractEndpoint.java:646)          at org.apache.coyote.AbstractProtocol.init(AbstractProtocol.java:434)          at org.apache.coyote.http11.AbstractHttp11JsseProtocol.init(AbstractHttp11JsseProtocol.java:119)          at org.apache.catalina.connector.Connector.initInternal(Connector.java:978)          ... 13 more  
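The stack trace shows Tomcat's JSSE connector ("http-bio-8443") looking for a Java keystore; the SSLCertificateFile / SSLCertificateKeyFile attributes only take effect with the APR/native connector. A hedged sketch of one way forward: bundle the PEM key and certificate into a PKCS#12 keystore and point the JSSE connector at that (alias and password below are placeholders, and -nodes keeps the key unencrypted):

```shell
# Generate a key + self-signed cert (as in the question, but unencrypted),
# then bundle them into a PKCS#12 keystore Tomcat's JSSE connector can read.
openssl req -x509 -newkey rsa:2048 -nodes -keyout key.pem -out cert.pem \
    -days 999 -subj "/CN=example"
openssl pkcs12 -export -in cert.pem -inkey key.pem \
    -out keystore.p12 -name tomcat -passout pass:changeit
```

The connector would then use keystoreFile="/root/keyTest/keystore.p12" keystoreType="PKCS12" keystorePass="changeit" instead of the SSLCertificate* attributes (the path mirrors the question's directory and is an assumption).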

mod_rewrite in alias directory not working

Posted: 01 Aug 2021 06:04 PM PDT

I have a WordPress blog which lives under /var/www/mysite.com/blog and is accessible via www.mysite.com/blog.

When visiting www.mysite.com/blog/ I see the index page, but all links to subpages are broken, e.g. www.mysite.com/blog/my-great-article throws (Apache log):

File does not exist: /var/www/mysite.com/web/blog/my-great-article, referer: http://www.mysite.com/blog/

vhost:

Alias /blog "/var/www/mysite.com/web/blog"
<Location "/blog">
    Allow from all
    AddType application/x-httpd-php php php4 php3 html htm
</Location>

.htaccess in the /blog dir:

# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /blog/
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /blog/index.php [L]
</IfModule>

# END WordPress

How can I fix that?
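A hedged guess at the missing piece: .htaccess processing is controlled by AllowOverride, which can only be set inside a <Directory> block, and the vhost above only defines a <Location>. A sketch using the paths from the question (for Apache 2.2-style access directives, matching the config shown):

```
Alias /blog "/var/www/mysite.com/web/blog"
<Directory "/var/www/mysite.com/web/blog">
    Allow from all
    AllowOverride All
    AddType application/x-httpd-php php php4 php3 html htm
</Directory>
```

With AllowOverride All in effect, the WordPress rewrite rules in .htaccess would be read for requests under /blog.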
