Saturday, August 21, 2021

Recent Questions - Server Fault

Can I start gcloud shell in a specific region or zone?

Posted: 21 Aug 2021 08:33 PM PDT

Running gcloud cloud-shell --project project-name always seems to start the shell instance near me rather than in the default region/zone for the project.

Is there any way to specify the region/zone in which to start the cloud-shell instance?

Docker for Windows 10 Home error: Hardware assisted virtualization and data execution protection must be enabled in the BIOS

Posted: 21 Aug 2021 08:20 PM PDT

I want to use Docker on my Windows 10 Home Surface Pro 7 PC, and many sites say I should install WSL 2 and then install Docker Desktop for Windows.

After doing so, I launched Docker Desktop and got this error message.

Hardware assisted virtualization and data execution protection must be enabled in the BIOS. See https://docs.docker.com/docker-for-windows/troubleshoot/#virtualization  

The troubleshooting page says that for Windows 10 Home, I have to open Windows Features and check Virtual Machine Platform and Windows Subsystem for Linux (instead of Hyper-V, which Windows 10 Pro uses).
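
For reference, those two features can also be turned on from an elevated command prompt; a minimal sketch using the standard DISM feature names (a reboot is still required afterwards):

dism.exe /online /enable-feature /featurename:VirtualMachinePlatform /all /norestart
dism.exe /online /enable-feature /featurename:Microsoft-Windows-Subsystem-Linux /all /norestart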

I checked these items, but Docker Desktop still returns the same error.

Also, this Surface Pro 7 does not have any virtualization setting in its UEFI.

How can I use Docker on my PC?

Any information would be appreciated.

AMD chipset drivers for Windows Server and Hyper-V Core

Posted: 21 Aug 2021 07:39 PM PDT

I'm struggling to install AMD chipset drivers (X570 & B550) for Windows Hyper-V Core 2019. The Win10 x64 drivers install on "Windows Server 2019 Standard" (Desktop Experience), but I have not found an .inf file to install the driver using pnputil from the terminal (needed for Server Core and Hyper-V Core).

I have not found any documentation saying that those chipsets do not support Windows Server, but AMD has only released a Win10-64 driver that fails to install on Hyper-V Core.

My usual trick of extracting the drivers from the Win10 driver executable has been unsuccessful; apparently that stopped working after v19.10.0429. The currently available driver is AMD_Chipset_Software_win10_2.17.25.506.exe and the *.inf driver appears to be hidden in a manifest file. Attempting to install that executable hits this error:

If I extract the AMD Win10 driver as far as possible and attempt to install it from the terminal, I see this error.

Error message when installing Win10 driver executable on Server Core

An alternative I considered was migrating to Server with Desktop Experience (GUI) to install the drivers and then reverting to Hyper-V Core, but switching back and forth is no longer possible in Server 2019 (that was discontinued after 2016).

Another option could be to install Server GUI on the same hardware, install the drivers and extract them using "pnputil /export-driver <oem#.inf | *> " before importing them using pnputil on the target operating system.
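
Sketched out, that export/import workflow might look like the following (the oemNN.inf name and paths are placeholders):

rem on the machine with Desktop Experience, after installing the chipset package
pnputil /export-driver oem42.inf C:\exported-drivers

rem on the Server Core / Hyper-V Core target
pnputil /add-driver C:\exported-drivers\*.inf /subdirs /install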

Is there something obvious I'm missing? How does everyone else do this?

Thanks!

How to force kill a frozen service running on Windows as a non-administrator?

Posted: 21 Aug 2021 07:38 PM PDT

Is there a way to force kill a frozen process that is set up as a service and runs under a service account, using that same service account?

What I am trying to do is set up a watchdog task in Windows Task Scheduler that checks whether the service has frozen. Once it is determined to be frozen, the watchdog calls taskkill to force-kill the service (net stop and Stop-Process/Stop-Service -Force both time out without killing the task). However, I keep running into "access is denied" errors.

While researching, I found out that I had to modify the SDDL permissions for each service, based on a previous question/answer asked here. However, taskkill appears to be unaffected by that. I can taskkill a process as long as it is owned by the same account (I did a live test in a command prompt using runas with the service account); however, while the service I manage through the Windows Service Manager is configured to log on as the service account, I still get "access is denied" when running taskkill on that service as the service account.

I'm hoping there is a way to do this without giving administrator permissions to a service account whose sole purpose is to manage those services.
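
For illustration, the watchdog described above is roughly this shape (a PowerShell sketch; the service name and the "frozen" check are placeholders for whatever detection logic fits the service):

# watchdog.ps1 - sketch: force-kill a service's process if it is deemed frozen
$serviceName = 'MyFrozenService'            # placeholder service name
$svc = Get-CimInstance Win32_Service -Filter "Name='$serviceName'"
if ($svc -and $svc.ProcessId -ne 0) {
    # placeholder check: replace with real hang detection (health endpoint, log age, etc.)
    $isFrozen = $true
    if ($isFrozen) {
        taskkill /PID $svc.ProcessId /F
    }
}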

Reference existing resources in CloudFormation

Posted: 21 Aug 2021 06:39 PM PDT

Is there a way to reference an existing resource in CloudFormation? I am looking for something similar to Terraform's data source facility, where I can look up a resource by tag, etc., and then use a property such as its ID.

I have an existing security group with a consistent name across accounts. If I could look up this SG in the template, I could use its ID.

Azure can do this; Terraform can do this.
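
For reference, the Terraform facility I have in mind looks roughly like this (names and filter values are illustrative, not from a real template):

data "aws_security_group" "shared" {
  filter {
    name   = "group-name"
    values = ["my-shared-sg"]
  }
}

# later, reference the looked-up ID, e.g.
# security_groups = [data.aws_security_group.shared.id]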

Apache2 in docker using IPv6

Posted: 21 Aug 2021 03:54 PM PDT

I'm trying to configure IPv6 for my Docker container. I want to expose port 80 over IPv6, but my website is still not working over IPv6. How can I check where the problem is? Maybe someone can find it in my config files:

docker-compose.yml:

version: '3'
services:
    web:
        container_name: ci4
        build:
            context: ./docker
        ports:
            - 80:80
        volumes:
            - ./:/var/www/html

Dockerfile

FROM php:7.4-apache
RUN apt-get update && apt-get install -y

COPY 000-default.conf /etc/apache2/sites-available/
COPY ports.conf /etc/apache2/

RUN a2enmod rewrite
RUN service apache2 restart

ports.conf:

Listen [::]:80  

000-default.conf

<VirtualHost [::]:80>
    DocumentRoot "/var/www/html/public/"
    ServerName localhost
    <Directory "/var/www/html/public/">
        AllowOverride all
    </Directory>
</VirtualHost>
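
One thing that may also be relevant (an assumption on my part, not something I have verified here): Docker's own IPv6 support generally has to be enabled in the daemon before a published port is reachable over IPv6. A sketch of /etc/docker/daemon.json with a placeholder prefix:

{
  "ipv6": true,
  "fixed-cidr-v6": "2001:db8:1::/64"
}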

Thanks for any help.

Rewrite Rule for URL with query does not work

Posted: 21 Aug 2021 08:47 PM PDT

I wrote the following rewrite rules:

<IfModule mod_rewrite.c>
RewriteCond %{QUERY_STRING} rp=\/knowledgebase\/.*
RewriteRule ^\/customer\/index.php /knowledgebase/ [R=301,L]
RewriteRule ^\/customer\/knowledgebase\.php$ /knowledgebase/ [R=301,L,QSA]
</IfModule>

To redirect URL such as

https://www.example.com/customer/index.php?rp=/knowledgebase/5/DataNumen-Excel-Repair to https://www.example.com/knowledgebase/

And redirect URL such as

https://www.example.com/customer/knowledgebase.php to https://www.example.com/knowledgebase/

But neither works. Why?

Update

I tried moving MrWhite's code from /.htaccess to /customer/.htaccess and made some minor adjustments to fit, as below:

RewriteCond %{QUERY_STRING} rp=/knowledgebase/
RewriteRule ^index\.php$ https://www.example.com/knowledgebase/ [QSD,R=301,L,NC]

RewriteRule ^knowledgebase\.php$ https://www.example.com/knowledgebase/ [R=301,L,NC]

Now the redirect works. However, it only works for cases like:

https://www.example.com/customer/index.php?rp=/knowledgebase/9/DataNumen-PDF-Repair

but for a case like

https://www.example.com/customer/index.php?a=b&c=d&rp=/knowledgebase/9/DataNumen-PDF-Repair

it will not work, even after I change ^rp= to rp in the RewriteCond.
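
For what it's worth, a condition that tolerates other parameters before rp usually anchors on either the start of the query string or a preceding &, something like this (a sketch, not yet tested against this site):

RewriteCond %{QUERY_STRING} (^|&)rp=/knowledgebase/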

Fail2Ban blocks IP despite both ignoreself and ignoreip being set in jail.local

Posted: 21 Aug 2021 04:50 PM PDT

Solution: The IP range in ignoreip was set incorrectly using CIDR. It should have been 192.168.2.0/24 rather than 192.168.2.1/32.
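
In jail.local terms, the working [DEFAULT] entry therefore looks like this:

[DEFAULT]
ignoreself = true
ignoreip = 192.168.2.0/24 ::1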

original post:

Another user had a similar problem caused by conflicting ignoreip settings (jail.local's ignoreip replacing jail.conf's). However, the only ignoreip I am using is the one in jail.local, and I have not edited jail.conf at all, so that user's solution did not apply to me.

I've made the following changes in jail.local:

>diff /etc/fail2ban/jail.conf /etc/fail2ban/jail.local

87c87
< #ignoreself = true
---
> ignoreself = true
92c92
< #ignoreip = 192.168.2.0/255
---
> ignoreip = 192.168.2.1/32 ::1
101c101
< bantime = 10m
---
> bantime = -1
208c208
< banaction = iptables-multiport
---
> banaction = iptables-allports

All of these changes are within the [DEFAULT] block.

I've >sudo /etc/init.d/fail2ban restart'ed many times, and >sudo shutdown -r 0'd many times as well. Despite this, every time I try to intentionally fail ssh logins from 192.168.2.13, the IP gets blocked after 5 tries. After this, I have to manually unban it using >sudo fail2ban-client set sshd unbanip 192.168.2.13.

>tail /var/log/fail2ban.log

2021-08-20 21:43:57,190 fail2ban.jail [1703]: INFO Jail 'sshd' started
2021-08-20 21:44:04,082 fail2ban.filter [1703]: INFO [sshd] Found 192.168.2.13 - 2021-08-20 21:44:03
2021-08-20 21:44:05,792 fail2ban.filter [1703]: INFO [sshd] Found 192.168.2.13 - 2021-08-20 21:44:05
2021-08-20 21:44:10,357 fail2ban.filter [1703]: INFO [sshd] Found 192.168.2.13 - 2021-08-20 21:44:09
2021-08-20 21:44:15,613 fail2ban.filter [1703]: INFO [sshd] Found 192.168.2.13 - 2021-08-20 21:44:15
2021-08-20 21:44:19,166 fail2ban.filter [1703]: INFO [sshd] Found 192.168.2.13 - 2021-08-20 21:44:19
2021-08-20 21:44:19,216 fail2ban.actions [1703]: NOTICE [sshd] Ban 192.168.2.13

Duplicating jail.local (with the appropriate ignoreip, etc.) as jail.conf also did not work.

Any ideas?

Using an Apache reverse proxy to send all requests for /blog to an internal WordPress server

Posted: 21 Aug 2021 04:01 PM PDT

I have a website written in React, and now I want to add a blog section to the site. The blog is going to be based on WordPress.

The React app runs in a Docker container, and I use the wordpress Docker container to run the WordPress blog.

In order to access the website, I use another container running Apache and acting as a reverse proxy.

Inside the httpd.conf file for the apache container, I have the following section:

<VirtualHost *:80>
    <Location "/">
        ProxyPreserveHost On
        ProxyPass "${REACT_SERVER}/"
        ProxyPassReverse "${REACT_SERVER}/"
    </Location>

    <Location /blog>
        ProxyPreserveHost On
        ProxyPass "${BLOG_SERVER}/"
        ProxyPassReverse "${BLOG_SERVER}/"
        ProxyPassReverseCookiePath  "/"  "/blog"
    </Location>

    # more config for handling websockets
</VirtualHost>

The variables REACT_SERVER and BLOG_SERVER come from the environment.

The problem I'm having is that when I try to access the blog, Apache successfully proxies my request to the internal WordPress site. But when WordPress issues its own redirect, it uses the same host as Apache while the path does not start with /blog, so my React app tries to handle the request, eventually gives up, and does its own redirect to the home page.

Here is an example using curl:

➜ curl -v http://localhost:3005/blog/
*   Trying 127.0.0.1:3005...
* Connected to localhost (127.0.0.1) port 3005 (#0)
> GET /blog/ HTTP/1.1
> Host: localhost:3005
> User-Agent: curl/7.74.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 302 Found
< Date: Fri, 20 Aug 2021 16:27:32 GMT
< Server: Apache/2.4.48 (Debian)
< X-Powered-By: PHP/7.4.22
< Expires: Wed, 11 Jan 1984 05:00:00 GMT
< Cache-Control: no-cache, must-revalidate, max-age=0
< X-Redirect-By: WordPress
< Location: http://localhost:3005/wp-admin/install.php
< Content-Length: 0
< Content-Type: text/html; charset=UTF-8
<
* Connection #0 to host localhost left intact

As you can see, in the X-Redirect-By response, the Location starts with /wp-admin instead of /blog/wp-admin.

From the docs on ProxyPassReverse:

For example, suppose the local server has address http://example.com/; then

ProxyPass         "/mirror/foo/" "http://backend.example.com/"
ProxyPassReverse  "/mirror/foo/" "http://backend.example.com/"
ProxyPassReverseCookieDomain  "backend.example.com" "public.example.com"
ProxyPassReverseCookiePath  "/"  "/mirror/foo/"

will not only cause a local request for the http://example.com/mirror/foo/bar to be internally converted into a proxy request to http://backend.example.com/bar (the functionality which ProxyPass provides here). It also takes care of redirects which the server backend.example.com sends when redirecting http://backend.example.com/bar to http://backend.example.com/quux . Apache httpd adjusts this to http://example.com/mirror/foo/quux before forwarding the HTTP redirect response to the client. Note that the hostname used for constructing the URL is chosen in respect to the setting of the UseCanonicalName directive.

and it seems that this is all that's required for this to work, but it still doesn't.

And if you are wondering: yes, I have tried the plain form (without the Location directive):

ProxyPass "/blog/" "${BLOG_SERVER}/"  ProxyPassReverse "/blog/" "${BLOG_SERVER}/"  ProxyPassReverseCookiePath  "/"  "/blog"    # etc...  

And I also get the same results.
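
One more data point that may matter (an assumption, since I have not changed it yet): WordPress builds redirects like the one in the curl output from its configured site address, which can be pinned in wp-config.php, e.g. (host/port taken from the example above):

define('WP_HOME',    'http://localhost:3005/blog');
define('WP_SITEURL', 'http://localhost:3005/blog');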

What am I missing?

Stopping bogus API calls in IIS

Posted: 21 Aug 2021 06:10 PM PDT

Can IIS stop bogus API calls? Yesterday I got flooded with something that was probing whether certain pages exist on the site. The requests got 404s, but the application still had to check whether each path was a valid page. Can IIS stop this, or will the web application need to process and stop it? Is there a section in IIS where I can add the bogus paths to stop this? Would https://docs.microsoft.com/en-us/iis/configuration/system.webserver/security/requestfiltering/denyurlsequences/ help, or a reverse proxy using IIS URL Rewrite that only passes through the traffic that's set up?

Bogus API calls

The controller for path '/bitrix/admin/' was not found
The controller for path '/cgi-bin/webcm'
The controller for path '/admin' was not found
The controller for path '/system/login'
The controller for path '/typo3/phpmyadmin/'

App Log file

2021-08-17 15:05:28,382 [16] ERROR HTI.LogServices.Implementation.Log4NetHelper - [undefined]: Unhandled Exception (System.Web.HttpException (0x80004005): The controller for path '/admin' was not found or does not implement IController.
       at System.Web.Mvc.DefaultControllerFactory.GetControllerInstance(RequestContext requestContext, Type controllerType)
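
If request filtering turns out to be the way to go, a web.config fragment along these lines should reject the probes listed above before they reach the application (a sketch; the sequences are taken from the log and would need tuning):

<configuration>
  <system.webServer>
    <security>
      <requestFiltering>
        <denyUrlSequences>
          <add sequence="/bitrix/" />
          <add sequence="/cgi-bin/" />
          <add sequence="/typo3/" />
          <add sequence="phpmyadmin" />
        </denyUrlSequences>
      </requestFiltering>
    </security>
  </system.webServer>
</configuration>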

AWS: How to redirect many domains to a page on another domain?

Posted: 21 Aug 2021 06:20 PM PDT

My objective

I have a number of domains (e.g. 10 or 20), and I would like to redirect visitors to anywhere on those domains to one page on another domain (for example, my stackoverflow.com profile page).

This includes

  1. apex domain using http (e.g. http://mydomain01.com)
  2. apex domain using https (e.g. https://mydomain01.com)
  3. sub domains using http (e.g. http://www.mydomain01.com or http://blog.mydomain01.com)
  4. sub domains using https (e.g. https://www.mydomain01.com or https://blog.mydomain01.com)
  5. any paths (e.g. http://mydomain01.com/some_path or https://www.mydomain01.com/another/path.html)

plus the same for all my other domains (mydomain02.com, mydomain03.com, etc.; each with the above use cases).

My research

  1. This AWS article explains how to redirect internet traffic from an apex domain to another domain (case #1 in my This includes list above) using AWS S3 and AWS Route 53: This works for http, but not for https.
  2. This AWS article explains how to redirect internet traffic for a number of cases (by the looks of it, covering all cases in my This includes list above) using AWS S3, AWS Route 53 and AWS CloudFront: This works for both http and https. (Also talks about using an Application Load Balancer, but I guess that's beyond the scope here...)
  3. This AWS article adds some more details on setting up a CloudFront distribution and how to get insight into log files.
  4. This AWS article documents redirection rules to use advanced conditional redirects: Not sure if I need to go there to accomplish my goal, so haven't really looked into that yet.

Plus, there are obviously plenty of SO questions (see Related to the right of this question) and other posts on the subject; problem with most of those is that they use screenshots from previous versions of the AWS Console UI: Most of the contents should still be the same, but correlating those screenshots to the current UI IMO adds another layer of confusion.

Key takeaways from the AWS (and other) docs:

  1. I need to create a bucket in AWS S3 and configure redirection in it,
  2. I need to create a distribution in AWS CloudFront;
  3. in order to use a custom domain in CloudFront, I need to create a certificate in AWS ACM,
  4. I need to create a hosted zone in AWS Route 53 and configure records in it.

My work so far

The latest AWS CLI is installed, region and output are configured in ~/.aws/config, credentials are set up in ~/.aws/credentials (each for every AWS account); AWS_* environment variables are exported.

I am using AWS region US East (N. Virginia) (us-east-1) for everything to prevent any additional issues caused by AWS resources not being available in a region.

$ aws --version
aws-cli/2.2.23 Python/3.9.6 Darwin/19.6.0 source/x86_64 prompt/off

I omit any shell prompts or > shell line continuation characters for easier copying from this post into the shell.

Set up an S3 bucket

Warning: This creates an "all public" bucket without any access restrictions. In this case, this should not matter as there are no bucket contents to protect, but such a public bucket is a bad practice in general. Also, I'm using a public bucket to prevent any additional issues caused by access restrictions: First, get it to work; second, make it secure.

create the bucket

aws s3api create-bucket --bucket mydomain01.com  
  • response:
{
    "Location": "/mydomain01.com"
}

set up redirection

aws s3api put-bucket-website --bucket mydomain01.com --website-configuration \
    '{ "RedirectAllRequestsTo": { "HostName": "stackoverflow.com/users/217844/ssc" } }'
  • no response

Gotcha: The S3 bucket name must match the apex domain name.

Using any bucket name but mydomain01.com (for my example) seems to fail without any indication as to the cause. The AWS docs don't really make this very clear - in fact, I am still not sure if I massively misunderstand something here, but from what I can tell, the official AWS documentation is actually somewhat sloppy on that - IMO - crucial key point: For example, #2 just says

  1. Create an S3 bucket with a global unique name.

which could be any globally unique name. #1 mentions that in some way - once you know how to read those bits...

On a side note, that article #2 continues to confuse me with

If you aren't using a custom domain ...

Why would I not be using a custom domain ?!? The whole point is to redirect my custom domain, isn't it ?!? Well, anyway...

Gotcha: Must not prepend protocol to hostname.

Neither the AWS Console nor the AWS CLI seem to test if a protocol (http:// or https://) was entered in the Host name UI field / passed in the HostName JSON string. However, if one is prepended, the redirect fails; see test redirection below.

Gotcha: AWS S3 Console UI bug.

After redirection has been set up, the AWS Console displays clickable link in its UI to the bucket URL (http://mydomain01.com.s3-website-us-east-1.amazonaws.com) at the very bottom of the bucket's Properties tab, in the Static website hosting section.

Clicking that link fails to open the page, seemingly because the AWS Console messes up the URL and tries to open http://https//stackoverflow.com/users/217844/ssc/, no matter the protocol.

test redirection

  • using HTTPie in the shell instead of curl or wget because that's what the cool kids seem to use nowadays
  • copy the link from AWS Console in browser to shell
http http://mydomain01.com.s3-website-us-east-1.amazonaws.com/
HTTP/1.1 301 Moved Permanently
Content-Length: 0
Date: Mon, 02 Aug 2021 12:39:09 GMT
Location: http://stackoverflow.com/users/217844/ssc/
Server: AmazonS3
x-amz-id-2: rakAqUMnRraGvo/WkSa6AnbuhWn/9YZX/CAlI/OJQKYoWp/OdQIbyhsvHSwNved3suwMdgglqpE=
x-amz-request-id: C5BBG833Q9TQ9J6X

--> seems to work

  • test the redirection if the protocol was erroneously prepended to the host name; note the broken Location url:
http http://mydomain01.com.s3-website-us-east-1.amazonaws.com/
HTTP/1.1 301 Moved Permanently
Content-Length: 0
Date: Mon, 02 Aug 2021 12:52:10 GMT
Location: http://https://stackoverflow.com/users/217844/ssc/
Server: AmazonS3
x-amz-id-2: Ee2/ob0faTpRdp6mGITdmClozXNmF1Q2oTbPioms8O91VA8n5VA3MoHhveeFz7v2VS65YKFKlDA=
x-amz-request-id: ZJP653R50YD5HSRS

My questions #1

NOTE: I had these questions when I started out writing this; I think I was able to answer them myself since then (see test www sub domain record below). Someone please correct me if I'm wrong:

  1. Q: Does the "bucket name == domain name" requirement apply even if I use CloudFront ?
    A: Yes.
  2. Q: Do I need to create one bucket each for the apex domain and every subdomain ? so, in my example
    • mydomain01.com
    • www.mydomain01.com
    • blog.mydomain01.com ?
      A: Yes.

Set up a Route 53 hosted zone

create the hosted zone

aws route53 create-hosted-zone --caller-reference "$(date '+%Y%m%d-%H%M%S')" --name mydomain01.com  
  • response
{
    "Location": "https://route53.amazonaws.com/2013-04-01/hostedzone/Z123456789EXAMPLE0SKX",
    "HostedZone": {
        "Id": "/hostedzone/Z123456789EXAMPLE0SKX",
        "Name": "mydomain01.com.",
        "CallerReference": "20210802-150736",
        "Config": {
            "PrivateZone": false
        },
        "ResourceRecordSetCount": 2
    },
    "ChangeInfo": {
        "Id": "/change/C1234567890SKXEXAMPLE",
        "Status": "PENDING",
        "SubmittedAt": "2021-08-02T13:07:37.860000+00:00"
    },
    "DelegationSet": {
        "NameServers": [
            "ns-1234.awsdns-12.com",
            "ns-5678.awsdns-34.co.uk",
            "ns-1234.awsdns-56.net",
            "ns-5678.awsdns-78.org"
        ]
    }
}
  • take note of the hosted zone ID Z123456789EXAMPLE0SKX, needed in the next steps

create a record for the apex domain

{
  "Changes": [
    {
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "mydomain01.com.",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "Z3AQBSTGFYJSTF",
          "DNSName": "s3-website-us-east-1.amazonaws.com",
          "EvaluateTargetHealth": false
        }
      }
    }
  ]
}

Gotcha: Must use verbatim s3-website-us-east-1.amazonaws.com for DNSName.

AWS docs talk in all sort of places about example.com or example.com.s3-website-us-east-1.amazonaws.com, etc. In this case, this is not some example to be replaced by own values (e.g. mydomain01.com.s3-website-us-east-1.amazonaws.com), but the verbatim value from the table, i.e. s3-website-us-east-1.amazonaws.com.

Gotcha: Must not prepend protocol to hostname.

Similar to the gotcha above, both AWS Console and the AWS CLI gladly accept a protocol (http:// or https://) prepended to the value entered in the Host name UI field / passed as DNSName. At least, this looks very wrong in the Console, e.g. http\072\057\057mydomain01.s3-website-us-east-1.amazonaws.com.

Both gotchas are somewhat mitigated in the AWS Console where values can be selected from a dropdown box when a record is created or edited; when using the AWS CLI, you must double-check what you send.

The same gotcha and mitigation applies to the Record name UI field / Name JSON value.

create a record for the apex domain, cont.

  • use jq for a quick test that the temp file contains valid JSON
jq . < change-batch.apex.json 1> /dev/null  
  • no output --> valid JSON
aws route53 change-resource-record-sets --hosted-zone-id Z123456789EXAMPLE0SKX \
    --change-batch "file://$(pwd)/change-batch.apex.json"
  • response
{
    "ChangeInfo": {
        "Id": "/change/C1234567890EXAMPLESKX",
        "Status": "PENDING",
        "SubmittedAt": "2021-08-02T14:20:09.370000+00:00"
    }
}

test apex domain record

  • test http
http http://mydomain01.com
HTTP/1.1 301 Moved Permanently
Content-Length: 0
Date: Mon, 02 Aug 2021 15:06:08 GMT
Location: http://stackoverflow.com/users/217844/ssc/
Server: AmazonS3
x-amz-id-2: EfDtCxif2iV4eInskirSBAOjQS7o9arzJCeZjscF6mW7cwwmm9Nxb7QJT50x2kjdslX2fOxA+lk=
x-amz-request-id: WM7K9TDEF75A6P1V

--> looks good

  • test http with a path
http http://mydomain01.com/some/path
... similar output as above ...
  • test https
http https://mydomain01.com

http: error: ConnectionError: HTTPSConnectionPool(host='mydomain01.com', port=443):
  Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x101a48100>:
    Failed to establish a new connection: [Errno 60] Operation timed out')) while doing a GET request to URL: https://mydomain01.com/
  • (response wrapped for readability)

--> times out (after 60s ?) - as expected: redirecting using an S3 bucket does not work with https (see above)

Gotcha: DNS change propagation delay.

AWS and Google are very fast in terms of propagating changes to DNS settings (as in seconds or minutes), but there might be other, "slower" name servers involved. Bypass them as described here to eliminate that source of confusion. That approach only works in macOS, but the concept is the same for any OS.

Gotcha: Browser cache.

When testing DNS changes not in the shell, but in the browser, the browser might get results from its cache. I do most of my work using Chrome, but use Firefox (or Safari) for testing, so I can clear the entire cache before every test to eliminate that potential issue - without getting logged out of Google, AWS, etc.

create a record for www sub domain

  • the only difference is the Name JSON value
sed -e 's|mydomain01.com.|www.mydomain01.com.|g' change-batch.apex.json > change-batch.www.json
aws route53 change-resource-record-sets --hosted-zone-id Z123456789EXAMPLE0SKX \
    --change-batch "file://$(pwd)/change-batch.www.json"
  • response similar as above

test www sub domain record

  • test http
http http://www.mydomain01.com
HTTP/1.1 404 Not Found
Content-Length: 363
Content-Type: text/html; charset=utf-8
Date: Mon, 02 Aug 2021 15:28:05 GMT
Server: AmazonS3
x-amz-id-2: MGLcynq1iEGKh+pT6N6iRpCuQSN243q/5zm2Y7rXTnM7iW9nvDokF6s20xEUBr7QiEtBPEzZmII=
x-amz-request-id: TK83G35EMYFR8SKX

<html>
<head><title>404 Not Found</title></head>
<body>
<h1>404 Not Found</h1>
<ul>
<li>Code: NoSuchBucket</li>
<li>Message: The specified bucket does not exist</li>
<li>BucketName: www.mydomain01.com</li>
<li>RequestId: TK83G35EMYFR8SKX</li>
<li>HostId: MGLcynq1iEGKh+pT6N6iRpCuQSN243q/5zm2Y7rXTnM7iW9nvDokF6s20xEUBr7QiEtBPEzZmII=</li>
</ul>
<hr/>
</body>
</html>
  • I think that answers the second of My questions #1 above: I need one S3 bucket per apex/sub domain to forward.

Set up a CloudFront distribution

create a certificate

  • AWS ACM request-certificate docs
  • the certificate is supposed to work for the apex and all sub domains, so need to add another name to this certificate / pass --subject-alternative-names; see this AWS article (the upper blue box).
  • add quotes around *.mydomain01.com so the shell does not interpret the *
aws acm request-certificate --domain-name mydomain01.com --validation-method DNS \
    --subject-alternative-names '*.mydomain01.com'
  • response:
{
    "CertificateArn": "arn:aws:acm:us-east-1:123456789012:certificate/12345678-90ab-cdef-1234-1234567890ab"
}
  • 123456789012 is my AWS account ID; everything after certificate/ is just a UUID

get certificate details

  • AWS ACM describe-certificate docs
  • save response to temporary local file; extract ResourceRecord.Name and ResourceRecord.Value using jq
  • needed for the AWS Route 53 record that proves I own mydomain01.com
  • alternatively, use the --query parameter with aws acm describe-certificate (sketched after the jq example below)
aws acm describe-certificate \
    --certificate-arn "arn:aws:acm:us-east-1:123456789012:certificate/12345678-90ab-cdef-1234-1234567890ab" \
    > describe-certificate.json
jq -r '.Certificate.DomainValidationOptions[0].ResourceRecord.Name' describe-certificate.json
_1234567890abcdef1234567890abcdef.mydomain01.com.

jq -r '.Certificate.DomainValidationOptions[0].ResourceRecord.Value' describe-certificate.json
_1234567890abcdef1234567890abcdef.weirdchars.acm-validations.aws.
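
The --query alternative mentioned above would look roughly like this (same ARN as before, output trimmed by JMESPath instead of jq):

aws acm describe-certificate \
    --certificate-arn "arn:aws:acm:us-east-1:123456789012:certificate/12345678-90ab-cdef-1234-1234567890ab" \
    --query 'Certificate.DomainValidationOptions[0].ResourceRecord' \
    --output json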

create a Route 53 record for certificate validation

  • will be automatically checked by AWS ACM and the certificate will be validated once this record is found
  • as before, use a temporary local file change-batch.cert.json, see e.g. create a record for the apex domain; contents:
{
  "Changes": [
    {
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "_1234567890abcdef1234567890abcdef.mydomain01.com.",
        "Type": "CNAME",
        "TTL": 300,
        "ResourceRecords": [
          {
            "Value": "_1234567890abcdef1234567890abcdef.weirdchars.acm-validations.aws."
          }
        ]
      }
    }
  ]
}
  • shell command:
aws route53 change-resource-record-sets --hosted-zone-id Z123456789EXAMPLE0SKX \
    --change-batch "file://$(pwd)/change-batch.cert.json"
  • response similar as when creating records above
  • NOTE: It might take a couple of minutes for ACM to validate the certificate.

create the CloudFront distribution

  • AWS CloudFront create-distribution docs
  • again, CallerReference must simply be a unique string; use e.g. date '+%Y%m%d-%H%M%S' in shell to create and copy into file; see create the hosted zone
  • as before, use a temporary local file create-distribution.json for complex values; contents below
  • MinimumProtocolVersion: get value from this AWS article
  • OriginProtocolPolicy: using http-only because the origin (the S3 bucket) can do only http
  • ViewerProtocolPolicy: using redirect-to-https as the whole point of creating this distribution is to redirect from http to https
  • NOTE: I don't know (and the AWS docs don't tell) which fields are mandatorily required; the AWS CLI command displays a clear and detailed message if something about the data sent is missing or wrong.
{
    "CallerReference": "20210802-191725",
    "Aliases": {
        "Quantity": 2,
        "Items": ["mydomain01.com", "*.mydomain01.com"]
    },
    "Origins": {
        "Quantity": 1,
        "Items": [
            {
                "Id": "mydomain01.com.s3.us-east-1.amazonaws.com_20210802-191725",
                "DomainName": "mydomain01.com.s3.us-east-1.amazonaws.com",
                "CustomOriginConfig": {
                    "HTTPPort": 80,
                    "HTTPSPort": 443,
                    "OriginProtocolPolicy": "http-only"
                }
            }
        ]
    },
    "OriginGroups": {
        "Quantity": 0
    },
    "DefaultCacheBehavior": {
        "TargetOriginId": "mydomain01.com.s3.us-east-1.amazonaws.com_20210802-191725",
        "ForwardedValues": {
            "QueryString": false,
            "Cookies": {
                "Forward": "none"
            },
            "Headers": {
                "Quantity": 0
            },
            "QueryStringCacheKeys": {
                "Quantity": 0
            }
        },
        "TrustedSigners": {
            "Enabled": false,
            "Quantity": 0
        },
        "ViewerProtocolPolicy": "redirect-to-https",
        "MinTTL": 0,
        "AllowedMethods": {
            "Quantity": 2,
            "Items": [
                "HEAD",
                "GET"
            ],
            "CachedMethods": {
                "Quantity": 2,
                "Items": [
                    "HEAD",
                    "GET"
                ]
            }
        },
        "SmoothStreaming": false,
        "DefaultTTL": 86400,
        "MaxTTL": 31536000,
        "Compress": false,
        "LambdaFunctionAssociations": {
            "Quantity": 0
        },
        "FieldLevelEncryptionId": ""
    },
    "CacheBehaviors": {
        "Quantity": 0
    },
    "CustomErrorResponses": {
        "Quantity": 0
    },
    "Comment": "",
    "Logging": {
        "Enabled": false,
        "IncludeCookies": false,
        "Bucket": "",
        "Prefix": ""
    },
    "PriceClass": "PriceClass_All",
    "Enabled": true,
    "ViewerCertificate": {
        "ACMCertificateArn": "arn:aws:acm:us-east-1:123456789012:certificate/12345678-90ab-cdef-1234-1234567890ab",
        "MinimumProtocolVersion": "TLSv1.2_2021",
        "SSLSupportMethod": "sni-only"
    },
    "Restrictions": {
        "GeoRestriction": {
            "RestrictionType": "none",
            "Quantity": 0
        }
    },
    "WebACLId": "",
    "HttpVersion": "http2",
    "IsIPV6Enabled": true
}
  • shell command:
  • NOTE: add --no-cli-pager to disable paging and store response in temporary local file for inspection
aws --no-cli-pager cloudfront create-distribution \
    --distribution-config "file://$(pwd)/create-distribution.json" > create-distribution.response.json
  • response: a large JSON structure, mostly the config sent with some distribution meta info

Gotcha: CloudFront distribution takes a moment to deploy.

In the Distributions overview, there is a Last modified field that says Deploying for a while after every change; depending on screen and browser window width, this field might be hidden, so the UI might look like the distribution is up and running while in fact it is not.

test distribution

  • get DomainName from response
jq -r '.Distribution.DomainName' create-distribution.response.json
abcdefghij1234.cloudfront.net
  • test http
http http://abcdefghij1234.cloudfront.net
HTTP/1.1 301 Moved Permanently
Connection: keep-alive
Content-Length: 183
Content-Type: text/html
Date: Mon, 02 Aug 2021 20:14:27 GMT
Location: https://abcdefghij1234.cloudfront.net/
Server: CloudFront
Via: 1.1 8640a37b586353bc916562c577770223.cloudfront.net (CloudFront)
X-Amz-Cf-Id: ooT0Y1QvDE7_yoRmb0p0Un2Db6O713rBvudtmz1xer7YwEU0GE8smw==
X-Amz-Cf-Pop: HAM50-C2
X-Cache: Redirect from cloudfront

<html>
<head><title>301 Moved Permanently</title></head>
<body bgcolor="white">
<center><h1>301 Moved Permanently</h1></center>
<hr><center>CloudFront</center>
</body>
</html>

So the distribution redirects from http://abcdefghij1234.cloudfront.net to https://abcdefghij1234.cloudfront.net - as it should; that's what it was created for.

  • test https
HTTP/1.1 403 Forbidden
Connection: keep-alive
Content-Type: application/xml
Date: Mon, 02 Aug 2021 20:14:35 GMT
Server: AmazonS3
Transfer-Encoding: chunked
Via: 1.1 c3e656776c8a9f0e1ea24405ab1dcc85.cloudfront.net (CloudFront)
X-Amz-Cf-Id: or4SC8urWEv_8c3jDURv5IINwFU1TDVLSQ3_X7tya7Ncz8ujyz0-IQ==
X-Amz-Cf-Pop: HAM50-C2
X-Cache: Error from cloudfront
x-amz-bucket-region: us-east-1

<?xml version="1.0" encoding="UTF-8"?>
<Error>
  <Code>AccessDenied</Code>
  <Message>Access Denied</Message>
  <RequestId>EAST1CM5WJR8QM3S</RequestId>
  <HostId>zF2dJm2vsuSM633NHuzcA5VqrCrNkfYGu31FRmKKIkebuI5+6l5DlVnr4kk9be262hcqktoiROw=</HostId>
</Error>
  • (xml formatted for readability)

That does not look good. Not sure if that is to be expected ?!?

Update the AWS Route 53 records

  • change from using the S3 bucket to using the CloudFront distribution
  • as before, create a temporary local file change-batch.apex.update.json by sed-ing the earlier file change-batch.apex.json
  • use UPSERT instead of CREATE: The record exists already and must be updated.
  • HostedZoneId: replace old value Z3AQBSTGFYJSTF (for S3) by Z2FDTNDATAQYW2, some magic value taken from change-resource-record-sets docs
  • DNSName: quote from change-resource-record-sets docs

Specify the domain name that CloudFront assigned when you created your distribution.

Your CloudFront distribution must include an alternate domain name that matches the name of the resource record set. For example, if the name of the resource record set is acme.example.com, your CloudFront distribution must include acme.example.com as one of the alternate domain names.

--> replace s3-website-us-east-1.amazonaws.com (for S3) by abcdefghij1234.cloudfront.net:

sed -e 's|CREATE|UPSERT|g' \
    -e 's|Z3AQBSTGFYJSTF|Z2FDTNDATAQYW2|g' \
    -e 's|s3-website-us-east-1.amazonaws.com|abcdefghij1234.cloudfront.net|g' \
    change-batch.apex.json > change-batch.apex.update.json
  • file contents:
{
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "mydomain01.com.",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "Z2FDTNDATAQYW2",
          "DNSName": "abcdefghij1234.cloudfront.net",
          "EvaluateTargetHealth": false
        }
      }
    }
  ]
}
  • shell command
aws route53 change-resource-record-sets \
    --hosted-zone-id Z123456789EXAMPLE0SKX \
    --change-batch "file://$(pwd)/change-batch.apex.update.json"
  • response similar as when creating records above

test apex domain record

  • test http
http http://mydomain01.com
HTTP/1.1 301 Moved Permanently
Connection: keep-alive
Content-Length: 183
Content-Type: text/html
Date: Mon, 02 Aug 2021 19:08:27 GMT
Location: https://mydomain01.com/
Server: CloudFront
Via: 1.1 2408979685aa1bdb752824d292e63bf7.cloudfront.net (CloudFront)
X-Amz-Cf-Id: Ww60Ol_0fdR8SsgcHeRYUd_de1rVejX6w_wuK80aR21e3IHstB-irA==
X-Amz-Cf-Pop: HAM50-C2
X-Cache: Redirect from cloudfront

<html>
<head><title>301 Moved Permanently</title></head>
<body bgcolor="white">
<center><h1>301 Moved Permanently</h1></center>
<hr><center>CloudFront</center>
</body>
</html>
  • the response now comes from CloudFront and no longer from S3, so the updated DNS record seems to work :-)
  • test http with a path
http http://mydomain01.com/some/path
... similar output as above ...

--> looks good

  • test https
http https://mydomain01.com
HTTP/1.1 403 Forbidden
Connection: keep-alive
Content-Type: application/xml
Date: Mon, 02 Aug 2021 18:53:51 GMT
Server: AmazonS3
Transfer-Encoding: chunked
Via: 1.1 ea89c67081222c8c680e7a37ad75f4f0.cloudfront.net (CloudFront)
X-Amz-Cf-Id: 5prv5_g5zXOX3aRBp2Gq64JJPuwC2o5dHIp9RCAHm6Ls8hK6EFghXw==
X-Amz-Cf-Pop: HAM50-C2
X-Cache: Error from cloudfront
x-amz-bucket-region: us-east-1

<?xml version="1.0" encoding="UTF-8"?>
<Error>
  <Code>AccessDenied</Code>
  <Message>Access Denied</Message>
  <RequestId>T78ASF3FA9QGV4T5</RequestId>
  <HostId>xaEgwEtbeesL4XfxMdxVoPAt9Lpb1ZDM9Fs5W4htBbcWNbV9sMUTjVAPIuWwAQ3Xh1yRhh4b4Ts=</HostId>
</Error>
  • (as before, xml formatted for readability)

NOTE: This response comes from AmazonS3, not from CloudFront as the previous one. The S3 bucket has no access restrictions whatsoever in place - so how can there be an access denied ?!?

double-check bucket permissions

aws s3api get-bucket-policy --bucket mydomain01.com

An error occurred (NoSuchBucketPolicy) when calling the GetBucketPolicy operation: The bucket policy does not exist

That matches the empty Bucket policy field in the AWS S3 Console - but is it really ok that there is no bucket policy at all ?!?

Now that I look back at test distribution above, I see that the response to accessing abcdefghij1234.cloudfront.net directly also comes from S3, not from CloudFront, so the problem seems to be pretty clear:

My questions #2

  1. Why does the S3 bucket deny access ?
  2. Is it normal for an S3 bucket to have no access policy at all ? Don't "public" buckets usually have a policy that explicitly allows access to anyone ? (See the sketch after this list.)
  3. Similar to one S3 bucket per apex / sub domain, do I also need one CloudFront distribution per apex / sub domain ?
  4. If so, I guess adding *.mydomain01.com as alternate domain to the certificate (and the distribution) does not really make any sense, does it ?!? I'd also need one certificate per distribution, dedicated to one domain, correct ?
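
For reference, the kind of bucket policy I have in mind in question 2 would look roughly like this (a generic public-read policy; I am not claiming this redirect-only bucket actually needs it):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::mydomain01.com/*"
        }
    ]
}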

How to redirect all Apache 2.4 websites to maintenance page while allowing access to specified IP addresses

Posted: 21 Aug 2021 03:02 PM PDT

I have two mirrored Apache 2.4 servers behind a load balancer with about 50 websites hosted on each. I need to close them for maintenance while allowing access from several specified IP addresses. During the maintenance, the maintenance.html page should be presented to visitors. I can't close it on the load balancer (which I initially wanted), so I need to do it through the Apache configuration on both servers. Does anyone know the most effective and simplest method?

I've already read many similar posts but I could not find the right answer that actually works. Many thanks!
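
For context, the kind of thing I am considering is a mod_rewrite snippet included into every vhost, roughly like this (the allowed addresses and the page location are placeholders, and I have not verified this is the best approach):

# maintenance mode: everyone except the listed addresses gets the maintenance page
RewriteEngine On
RewriteCond %{REMOTE_ADDR} !^203\.0\.113\.10$
RewriteCond %{REMOTE_ADDR} !^198\.51\.100\.20$
RewriteCond %{REQUEST_URI} !^/maintenance\.html$
RewriteRule ^ /maintenance.html [R=302,L]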

Task Scheduler not working on Windows Server 2016 EC2 instance on AWS, while EC2 is running?

Posted: 21 Aug 2021 07:30 PM PDT

I have set up a few batch files to execute Python scripts on Windows. I have confirmed the batch files work by double-clicking them, which launches CMD and shows the scripts running.

I have set up a scheduled task to kick off the batch files and tested it by hitting the 'Run' button in the Task Scheduler GUI.

The EC2 instance is always up and running, but when I close my Remote Desktop application for the EC2 instance, the Task Scheduler does not kick off my scripts. What am I doing wrong? I want the scheduled task to run regardless of whether I can see the desktop or not.

I am using a Mac to remote into the EC2 instance, if that helps. I am also very new to working with Windows in this much depth.

Thank you in advance.

Edit:

bat file settings:

@echo off
python C:\folder\folder\pythonscript.py %*
pause

Scheduler settings:

General Tab:
(checked) Run whether user is logged on or not
(checked) Run with highest privileges
Running as admin on local computer

Trigger Tab:
(checked) Daily
Recur every 1 day
(checked) repeat task every 1 min for duration 'indefinite' ----this is for testing
(checked) Enabled

Actions Tab:
Action: start a program
Program/script: C:\folder\bat_files\test.bat
Add Arguments(optional): blank
Start In(optional): blank

Conditions Tab:
(checked) start the task only if the computer is on AC power
(checked) Stop if computer switches to battery power
(checked) Wake the computer to run task

Settings Tab:
(checked) Allow task to be run on demand
(checked) Run task as soon as possible after schedule is missed
(checked) if task fails, restart every 1 min

Azure VPN Site-to-site connected but host not reachable

Posted: 21 Aug 2021 03:02 PM PDT

Using an Azure VPN gateway, I created a site-to-site connection with another VPN device (Checkpoint) over which I have no control (customer endpoint).

I created the connection using their public IP, declared the secret key, and for the local address space I discussed with the client which 'local' IP is desired on both sides. We agreed on an IP in the 172.0.0.0 range.

The gateway connection says succeeded/connected, and I see very little traffic in the data-out field (kb's not mb's).

However, when I try to ping the local address space (172.xxx.xxx.xxx) from my Windows Server 2016 VM, I only get "Request timed out" errors.

Do I need to create additional routes in windows? I tried adding route

  route -p ADD 172.xxx.xxx.xxx MASK 255.255.255.255 0.0.0.0  

but the host is still unreachable.

Any Ideas? Thanks

EDIT: added some progress below

Thanks, I allowed the ping and I can now ping my VPN Gateway from my Azure VM (which is 10.XXX.XXX.4). I then added the route "route -p ADD 172.xxx.xxx.xxx MASK 255.255.255.255 10.XXX.XXX.4"

and via tracert I can see the 172 address is routed to/via the VPN gateway, but then it times out. Does this mean the issue is now on the on-premises side?

Edit 2

By now, when running the vpn diag. log I do see some traffic, but I still cannot reach the other side.

Connectivity State : Connected
Remote Tunnel Endpoint : XXX.XXX.XXX.XXX
Ingress Bytes (since last connected) : 360 B
Egress Bytes (since last connected) : 5272 B
Ingress Packets (since last connected) : 3 Packets
Egress Packets (since last connected) : 130 Packets
Ingress Packets Dropped due to Traffic Selector Mismatch (since last connected) : 0 Packets
Egress Packets Dropped due to Traffic Selector Mismatch (since last connected) : 0 Packets
Bandwidth : 0 b/s
Peak Bandwidth : 0 b/s
Connected Since : 9/18/2017 5:33:18 AM

SSL Cipher Suite Order GPO

Posted: 21 Aug 2021 05:02 PM PDT

Thanks in advance for reading. I'm using Win Server 2012 R2 to dish out group policies.

I've created a GPO to define the SSL Cipher Suite Order under Policies > Admin Templates > Network > SSL Configuration Settings and have set it to "Enabled".

I'm using a list of strong cipher suites from Steve Gibsons website found here.

I've put them all on 1 long line as it states to do.

I've also manipulated a default registry value located at:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Cryptography\Configuration\Local\Default\00010002   

These are the same values I'm using from Gibsons site - on separate lines with no commas

My registry values change, but I cannot get the SSL Configuration Settings policy to display "Enabled".

Does anyone have insight on how to correct this issue?

Linux network monitoring, average MBps every 1 hr

Posted: 21 Aug 2021 06:04 PM PDT

I want to monitor the average network usage for my Debian server.

I've tried to mess with dstat, ntop, and a couple of other programs, but nothing seems to work the way I want.

Basically, I want a program/script that outputs average network stats every X amount of time. What's the best way to do what I need?
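
To make it concrete, the kind of script I am imagining reads the byte counters from /proc/net/dev twice and prints the average over the window, roughly like this (interface name and interval are placeholders):

#!/bin/bash
# avg-net.sh - print average RX/TX rate over one interval
IFACE=eth0          # placeholder interface
INTERVAL=3600       # window in seconds (1 hour)

rx1=$(awk -v i="$IFACE:" '$1 == i {print $2}'  /proc/net/dev)
tx1=$(awk -v i="$IFACE:" '$1 == i {print $10}' /proc/net/dev)
sleep "$INTERVAL"
rx2=$(awk -v i="$IFACE:" '$1 == i {print $2}'  /proc/net/dev)
tx2=$(awk -v i="$IFACE:" '$1 == i {print $10}' /proc/net/dev)

echo "average RX: $(( (rx2 - rx1) / INTERVAL )) B/s, average TX: $(( (tx2 - tx1) / INTERVAL )) B/s"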

Greetings,

Missing credentials for roles in Heat orchestration on OpenStack?

Posted: 21 Aug 2021 07:02 PM PDT

I am trying out Heat orchestration on OpenStack. When setting up a single instance stack using the configuration below, I am getting this error:

Error: ERROR: Missing required credential: roles [u'_member_']

What could be the issue here?

Thanks!

heat_template_version: 2013-05-23

description: Simple template to deploy a single compute instance

resources:
  my_instance:
    type: OS::Nova::Server
    properties:
      image: CentOS-6-x86_64-GenericCloud-2016-04-05
      flavor: c1-tiny
      key_name: mine
      networks:
        - network: private_network

Containerized PostgreSQL with data on a distributed file system

Posted: 21 Aug 2021 04:07 PM PDT

I am curious if somebody is actually running PostgreSQL in a container in production on some form of distributed file system - GlusterFS preferably, or anything.

I am currently running Mesos/Marathon. In case the PostgreSQL node fails, Marathon simply launches another instance of PostgreSQL on other nodes and if done properly (service discovery and application recovering from database connection loss), the ultimate fault tolerance will be achieved.

I know PostgreSQL has its own HA solutions, like log shipping and hot standby, but then one still needs to solve the problem of when to switch from master to slave, how to do it properly, and so on.

So, how do you run PostgreSQL in production on GlusterFS or similar? If so, is it stable? How about performance?

Promiscuous mode in KVM

Posted: 21 Aug 2021 08:04 PM PDT

I have cloud system based on Openstack Icehouse-version. Now I want to test newer version of Openstack (Juno) inside my existing cloud. All hosts and guests use Ubuntu 14.04 as their OS. KVM is hypervisor I am using.

So I created virtual machines on my cloud and installed components of Openstack Juno on them. But I have problems with network connectivity on these virtual machines.

Openstack installation guide says:

If you are building your OpenStack nodes as virtual machines, you must configure the hypervisor to permit promiscuous mode on the external network.

But it does not say how this is done, and I was not able to find this information by Googling either. I have tried many things, such as enabling promiscuous mode on various interfaces with the command ifconfig eth0 promisc, but nothing has worked. So how can I enable promiscuous mode on my hypervisor?

EDIT: When using ifconfig, I see that my interfaces are in state UP BROADCAST RUNNING PROMISC. Also, I have used a similar installation before, installed on physical hosts, and it had no problems.

CentOS CIFS mount point fails after reboot: permission denied, error 13

Posted: 21 Aug 2021 04:07 PM PDT

I'm using CentOS release 5.10. I have a mount point set up in /etc/fstab that was working, but now it doesn't. After a reboot the mount point doesn't exist, and running sudo mount -a results in: mount error 13 = Permission denied

The entry in my fstab looks like this:

//my.server/my\040Folder/MY\040SUBFOLDER/other\040folder       /var/ftp/virtual_users/myfolder cifs username=mydomain\134myuser,password=mypassword  1 1  

I've tried mounting manually in the command line using this:

$ sudo mount -t cifs "//my.server/my Folder/MY SUBFOLDER/other folder"   /var/ftp/virtual_users/myfolder --verbose  -o username=myuser,password=mypassword,domain=mydomain  

My result is: mount.cifs kernel mount options: unc=//my.server\my Folder,ip=192.168.150.100,ver=1,rw,username=myuser,domain=mydomain,prefixpath=MY SUBFOLDER/other folder,pass=********

mount error 13 = Permission denied Refer to the mount.cifs(8) manual page (e.g.man mount.cifs)

I can successfully log in with smbclient:

$ smbclient "//my.server/my Folder" -U myuser -W mydomain  

and from there I can cd into the "MY SUBFOLDER/other folder" directory.

After much Google searching, many suggested fixes involved setting the security mode. I tried ntlm, ntlmi, ntlmv2, and ntlmv2i, but none of the options changed the output.
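
For reference, that means adding the sec= mount option, along these lines (same command as above, one variant shown):

sudo mount -t cifs "//my.server/my Folder/MY SUBFOLDER/other folder" /var/ftp/virtual_users/myfolder \
    -o username=myuser,password=mypassword,domain=mydomain,sec=ntlmv2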

There is a mount entry for another folder on this same server, which is working, but it doesn't go down to a sub folder:

//my.server/other /var/ftp/virtual_users/other cifs username=mydomain\134myuser,password=mypassword,nobrl,noperm  1 1  

I also tried adding the noperm and nobrl options to my problem mount, but no changes.

The System Admin of the windows server (my.server) verified that myuser has full control of all the folders I'm trying to access.

How can I access my tomcat server running on my network remotely

Posted: 21 Aug 2021 06:04 PM PDT

I have a Tomcat server running on my PC which I can access locally via http://localhost:9090 (I changed the ports in the server.xml file). Now, I am having trouble accessing the Tomcat server remotely (i.e. from a different machine; I can test this with machines outside or inside my LAN).

Anyway, what I have tried is using the Netgear Genie interface to configure port forwarding. I have set all internal and external ports to 9090 (I tried using 80 for internal and 8080 for external, but they were already being used).

When I point my browser to http://my-ip-address:9090, it just times out... If I don't add a port, I am prompted for a username and password with a message box that says:

A username and password are being requested by http://my-ip-address. The site says: "SMC Dual WAN Load balancing VPN Router Administration Tools"

This is NOT the same as the username and password that I need to log in to the Netgear Genie interface. Note that I use Ubuntu (12.04, 13.04 and 14.04) and Tomcat 7.

Please help me get connected to my tomcat server remotely.

Thanks for all the help, and let me know if you need any more information.

My two-way trust with selective auth seems to behave opposite to a one-way trust

Posted: 21 Aug 2021 09:04 PM PDT

I'm not sure why I'm the only one running into this; I think it's a larger problem with Server 2012 and RDS protocols... With 2008 machines, you can utilize a one-way trust to authenticate across domains with the TS Gateway service, but with 2012 it breaks when running across a one-way trust. I'm trying to implement a two-way trust that acts like a one-way trust for everything but Kerberos auth for things like TS Gateway and RDS services...


A little backstory, I've currently got two domains (A and B) with a one way, external trust. (Outgoing trust on A, users in B can access devices in A)

At the moment, I can log into a computer in domain A, and add a user from domain B with the GUI. (I can also do it from the CLI, but that's not relevant here)

When I build my test domain, I can recreate this behavior. If I create the test domain with a two-way trust, domain-wide authentication in both directions this behavior doesn't change, though it does allow me to auth in the reverse direction (duh) which I don't want.

When I change Domain B to 'selective authentication' for some reason the Users and Computers GUI stops working as expected.

  • For Domain B computers, I can still browse the GUI like normal, and even add Domain A users, though they're not allowed to log in, due to the selective Auth setup.
  • For Domain A computers, browsing the GUI doesn't allow the selection of users or groups, and the advanced search turns up an error that says: "The following error prevented the display of any items: Unspecified error"
  • For Domain A computers, if I know the username from Domain B, I can add the account using the 'net localgroup' commands and everything works just fine, but the GUI is broken, and this won't likely be a usable solution for the majority of our users...

My question (Sorry to take so long to get to it) is why does selective auth change the behavior of the trust so that it behaves differently than a one-way trust, and is there some simple thing I'm missing?

When I get the 'unspecified' error from the GUI, I get an error on the DC for Domain B:

A Kerberos service ticket was requested.

Account Information: Account Name: bob@DOMAINA Account Domain: DOMAINA Logon GUID: {00000000-0000-0000-0000-000000000000}

Service Information: Service Name: ldap/DC.DOMAINB/DOMAINB Service ID: NULL SID

Network Information: Client Address: ::ffff:192.168.18.70 Client Port: 62103

Additional Information: Ticket Options: 0x40800000 Ticket Encryption Type: 0xFFFFFFFF Failure Code: 0xC Transited Services: -

This event is generated every time access is requested to a resource such as a computer or a Windows service. The service name indicates the resource to which access was requested.

This event can be correlated with Windows logon events by comparing the Logon GUID fields in each event. The logon event occurs on the machine that was accessed, which is often a different machine than the domain controller which issued the service ticket.

Ticket options, encryption types, and failure codes are defined in RFC 4120.

I don't understand why it tries to authenticate against DomainB using 'bob' from DomainA, when I provided DomainB credentials...

Thanks for any help you can provide, I've been banging on this for 3 days straight and haven't found anything useful yet.

script not found or unable to stat: /usr/lib/cgi-bin/php-cgi

Posted: 21 Aug 2021 06:26 PM PDT

I have just seen a new series of errors in /var/log/apache2/error.log:

[Thu Oct 31 06:59:04 2013] [error] [client 203.197.197.18] script not found or unable to stat: /usr/lib/cgi-bin/php

[Thu Oct 31 06:59:08 2013] [error] [client 203.197.197.18] script not found or unable to stat: /usr/lib/cgi-bin/php5

[Thu Oct 31 06:59:09 2013] [error] [client 203.197.197.18] script not found or unable to stat: /usr/lib/cgi-bin/php-cgi

[Thu Oct 31 06:59:14 2013] [error] [client 203.197.197.18] script not found or unable to stat: /usr/lib/cgi-bin/php.cgi

[Thu Oct 31 06:59:14 2013] [error] [client 203.197.197.18] script not found or unable to stat: /usr/lib/cgi-bin/php4

This server is running Ubuntu 12.04 LTS.

I have never seen this sort of attack before. Should I be concerned, or should I secure my system in any way against them?

Thanks, John

Cloud Server Error - File Does Not Exist: /var/www/html/public

Posted: 21 Aug 2021 10:00 PM PDT

I recently moved a webapp, built using Laravel, to a rackspace cloud server.

The homepage resolves just fine as I have the root set in the apache config.

However, when a request is made to any of the routes, the server attempts to look for an actual file with the name of the route. For example:

If I request

www.mywebapp.com/login  

The server error log shows

File Does Not Exist: /var/www/html/public/login  

Part of my apache config

<Directory/>
Options FollowSymLinks
AllowOverride None
</Directory>

And my .htaccess which is located in the public folder

# Apache configuration file
# http://httpd.apache.org/docs/2.2/mod/quickreference.html

# Note: ".htaccess" files are an overhead for each request. This logic should
# be placed in your Apache config whenever possible.
# http://httpd.apache.org/docs/2.2/howto/htaccess.html

# Turning on the rewrite engine is necessary for the following rules and
# features. "+FollowSymLinks" must be enabled for this to work symbolically.

<IfModule mod_rewrite.c>
    Options +FollowSymLinks
    RewriteEngine On
</IfModule>

# For all files not found in the file system, reroute the request to the
# "index.php" front controller, keeping the query string intact

<IfModule mod_rewrite.c>
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteCond %{REQUEST_FILENAME} !-d
    RewriteRule ^(.*)$ index.php/$1 [L]
</IfModule>

This is my first web app, so I am very new to all of these concepts and have no formal training. Constructive criticism is welcome. All help is greatly appreciated.

Edit 1: .htaccess file corrected
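For reference, routes such as /login only reach Laravel's index.php front controller when the rewrite rules are actually applied; with AllowOverride None, per-directory .htaccess files are ignored. A sketch of one possible <Directory> block for the document root, assuming it is /var/www/html/public and an Apache 2.2 setup as suggested by the .htaccess comments:

<Directory /var/www/html/public>
    Options FollowSymLinks
    AllowOverride All        # allow the .htaccess rewrite rules in public/ to take effect
    Order allow,deny         # Apache 2.2 access control; on 2.4 use "Require all granted" instead
    Allow from all
</Directory>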

All PHP sites stopped working on IIS7, internal server error 500

Posted: 21 Aug 2021 09:04 PM PDT

I installed multiple drupal 7 sites using the Web Platform Installer on Windows Server 2008.

Until now they worked without any problems, but recently internal server error 500 started to show up (once every so many requests); now it happens for all requests to any of the PHP sites.

There's not much detail to go on, and nothing changed between the time when it was working and now (well, nothing I know of anyway).

The log file is flooded with messages such as

[09-Aug-2011 09:08:04] PHP Fatal error:  Allowed memory size of 262144 bytes exhausted (tried to allocate 261904 bytes) in Unknown on line 0
[09-Aug-2011 09:08:16] PHP Fatal error:  Allowed memory size of 262144 bytes exhausted (tried to allocate 261904 bytes) in Unknown on line 0
[09-Aug-2011 09:08:16] PHP Fatal error:  Allowed memory size of 262144 bytes exhausted (tried to allocate 261904 bytes) in Unknown on line 0
[09-Aug-2011 09:08:20] PHP Fatal error:  Allowed memory size of 262144 bytes exhausted (tried to allocate 261904 bytes) in Unknown on line 0
[09-Aug-2011 09:08:22] PHP Fatal error:  Allowed memory size of 262144 bytes exhausted (tried to allocate 261904 bytes) in Unknown on line 0
[09-Aug-2011 09:08:51] PHP Fatal error:  Allowed memory size of 262144 bytes exhausted (tried to allocate 261904 bytes) in Unknown on line 0
[09-Aug-2011 09:09:56] PHP Fatal error:  Allowed memory size of 262144 bytes exhausted (tried to allocate 261904 bytes) in Unknown on line 0
[09-Aug-2011 09:09:57] PHP Fatal error:  Allowed memory size of 262144 bytes exhausted (tried to allocate 261904 bytes) in Unknown on line 0
[09-Aug-2011 09:12:13] PHP Fatal error:  Allowed memory size of 262144 bytes exhausted (tried to allocate 261904 bytes) in Unknown on line 0
[09-Aug-2011 09:15:09] PHP Fatal error:  Allowed memory size of 262144 bytes exhausted (tried to allocate 261904 bytes) in Unknown on line 0
[09-Aug-2011 09:15:09] PHP Fatal error:  Allowed memory size of 262144 bytes exhausted (tried to allocate 261904 bytes) in Unknown on line 0
[09-Aug-2011 09:21:28] PHP Fatal error:  Allowed memory size of 262144 bytes exhausted (tried to allocate 261904 bytes) in Unknown on line 0
[09-Aug-2011 09:21:28] PHP Fatal error:  Allowed memory size of 262144 bytes exhausted (tried to allocate 261904 bytes) in Unknown on line 0

I have tried increasing the memory limit in php.ini as such:

memory_limit = 512MB  

But that doesn't seem to solve the problem either.

This is in the global PHP configuration in IIS. When I looked at the sites one by one, I noticed that PHP seemed to have been disabled:

PHP is not enabled. Register new PHP version to enable PHP via FastCGI

So I tried to register the PHP version again:

C:\Program Files\PHP\v5.3\php-cgi.exe  

But when I try to apply the changes I get

There was an error while performing this operation Details: Operation is not valid due to the current state of the object

There doesn't seem to be any other information than that. I have no idea why PHP is suddenly no longer available for the sites.
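If PHP Manager keeps failing with that error, the FastCGI application and handler mapping can also be registered from an elevated command prompt with appcmd; this is only a sketch, adapted from the usual manual-install steps for PHP on IIS 7, using the php-cgi.exe path from the question:

%windir%\system32\inetsrv\appcmd set config /section:system.webServer/fastCgi /+"[fullPath='C:\Program Files\PHP\v5.3\php-cgi.exe']"
%windir%\system32\inetsrv\appcmd set config /section:system.webServer/handlers /+"[name='PHP_via_FastCGI',path='*.php',verb='*',modules='FastCgiModule',scriptProcessor='C:\Program Files\PHP\v5.3\php-cgi.exe',resourceType='Either']"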

PS: I have rebooted IIS, the server, etc. This server is hosted on Amazon EC2, so I gave the server some more power.

Update: These seem to be two different issues:

  1. I used memory_limit=128MB instead of memory_limit=128M.
    Notice the "MB" suffix where PHP expects just "M".
  2. A memory_limit of 128M was not enough; I had to increase it to 512M.

The first issue caused internal server errors for every request.

Increasing it to 512M seemed to solve the problem for a little while, but after a while the server errors returned. Note that PHP Manager inside IIS still shows there is no PHP available for the sites (the global configuration does see it as available).

So the problem remains unsolved.
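For reference, php.ini byte values use the single-letter shorthand suffixes K, M, and G; a value like 128MB is not interpreted as 128 megabytes, so the effective limit can end up far smaller than intended. A minimal php.ini sketch with the intended notation:

; php.ini - use the single-letter shorthand; "512MB" would not be parsed as intended
memory_limit = 512M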

SAP Homogeneous copy: How do you handle the BDLS step?

Posted: 21 Aug 2021 05:02 PM PDT

As part of an SAP homogeneous system copy, we almost always need to perform the BDLS step. This can cause a lot of grief because this step can take a few (or many) hours.

How do you manage it? Any tips and tricks?

Apache/wsgi "Script timed out before returning headers"

Posted: 21 Aug 2021 08:04 PM PDT

I have a custom Django app that's becoming unresponsive roughly every 5,000 requests. In the Apache logs, I see the following:

Apr 13 11:45:07 www3 apache2[27590]: **successful view render here**
...
Apr 13 11:47:11 www3 apache2[24032]: [error] server is within MinSpareThreads of MaxClients, consider raising the MaxClients setting
Apr 13 11:47:43 www3 apache2[24032]: [error] server reached MaxClients setting, consider raising the MaxClients setting
...
Apr 13 11:50:34 www3 apache2[27617]: [error] [client 10.177.0.204] Script timed out before returning headers: django.wsgi
(repeated 100 times, exactly)
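The MinSpareThreads/MaxClients warnings suggest all worker threads were tied up while requests hung. For reference only, these limits live in the worker MPM section of the Apache 2.2 config; the values below are illustrative defaults, not a recommendation:

<IfModule mpm_worker_module>
    StartServers          2
    MaxClients          150     # total concurrent request threads across all child processes
    MinSpareThreads      25
    MaxSpareThreads      75
    ThreadsPerChild      25
    MaxRequestsPerChild   0
</IfModule>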

I believe I am running WSGI 2.6 (/usr/lib/apache2/modules/mod_wsgi.so-2.6) with the following config:

Apache config:

WSGIDaemonProcess site-1 user=django group=django threads=50
WSGIProcessGroup site-1
WSGIScriptAlias / /somepath/django.wsgi

/somepath/django.wsgi

import os, sys

sys.path.append('/home/django')
os.environ['DJANGO_SETTINGS_MODULE'] = 'myapp.settings'

import django.core.handlers.wsgi

application = django.core.handlers.wsgi.WSGIHandler()

When this happens, I can kill the wsgi process and the server will recover.

>ps aux | grep django   # process is running as user "django"
django   27590  5.3 17.4 908024 178760 ?       Sl   Apr12  76:09 /usr/sbin/apache2 -k start
>kill -9 27590

This leads me to believe that the problem is a known issue:

deadlock-timeout=sss (2.0+)

Defines the maximum number of seconds allowed to pass before the daemon process is shut down and restarted after a potential deadlock on the Python GIL has been detected. The default is 300 seconds. This option exists to combat the problem of a daemon process freezing as the result of a rogue Python C extension module which doesn't properly release the Python GIL when entering into a blocking or long-running operation.

However, I'm not sure why this condition is not clearing automatically. I do see that the script timeout occurs exactly 5 minutes after the last successful page render, so the deadlock-timeout is getting triggered. But it does not actually kill the process.
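For what it's worth, those daemon-process options can be set explicitly on WSGIDaemonProcess, which at least makes the intended restart behaviour visible in the config. A sketch based on the existing site-1 configuration; deadlock-timeout, inactivity-timeout and maximum-requests are standard mod_wsgi 2.x options, and the values here are only illustrative:

WSGIDaemonProcess site-1 user=django group=django threads=50 \
    deadlock-timeout=60 \
    inactivity-timeout=300 \
    maximum-requests=5000
WSGIProcessGroup site-1
WSGIScriptAlias / /somepath/django.wsgi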

Edit: more info

  • apache version 2.2, using the worker MPM
  • wsgi version 2.8
  • SELinux NOT installed
  • xml package being used, infrequently
  • Ubuntu 10.04

Is there a way to make a "Screen" session survive reboot?

Posted: 21 Aug 2021 07:36 PM PDT

I am using the Screen utility and would like to preserve the session when the machine reboots. If that is not possible, maybe you can recommend other alternatives to Screen that would allow preserving the sessions between reboots.

I am using Ubuntu Server 10.04 (Lucid Lynx) if that matters.

I have several sessions opened via Screen. When the machine reboots, all those sessions are lost and I have to reopen them again. I wanted to find a way to preserve those Screen sessions.
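The processes inside a Screen session cannot literally survive a reboot, but a common workaround is to recreate named, detached sessions automatically at boot. A sketch using a crontab @reboot entry; the session name and command are hypothetical placeholders:

# crontab -e entry: recreate a named, detached screen session at every boot
@reboot /usr/bin/screen -dmS work /path/to/long-running-command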

Help: Setting up a basic live stream viewable in a browser

Posted: 21 Aug 2021 07:02 PM PDT

The end goal of my project is to create a system which records TV from a TV capture card, streams it live so it is viewable in a web page, and stores a copy on the hard drive. It seems like a rather simple concept, but I've been struggling with this for weeks. I've asked on the Ubuntu help forums, the VideoLAN.org forums and now here. Someone out there has to have done something like this without using one of the expensive streaming servers (Adobe Streaming Server/Wowza).

The key point I'm stuck at is the live stream, because it has the following characteristics:

  1. When a user begins viewing the stream they start at the current point, not the beginning (this can be done with any seekable system, even a pseudo-streamer).
  2. The stream needs to update dynamically so that the player (Flowplayer or JW Player) can continue to show the newly encoded data. I tried lighttpd's mod_flv_streaming and ran into the issue that once a user begins streaming, the player considers the file "finished" and will not retrieve new data from the server, even though new data is added every second.

My closest attempt was using VLC streaming over HTTP. I used the following encoding line:

:sout=#transcode{vcodec=h264,vb=800,scale=1,width=320,height=240,acodec=mp4a,ab=128,channels=2,samplerate=44100}:std{access=http,dst=192.168.0.75:8080/file.flv}  

Debugging results:

  1. WORKS - Accessing the stream locally (on the same Ubuntu box that is streaming it) in a separate instance of VLC at http://192.168.0.75:8080/file.flv.

  2. SEMI-WORKS - Accessing the stream from another computer on the LAN. I say semi-works because it takes 45 seconds to a minute to load the stream, which is odd and signals that something is awry.

  3. WORKS - I am able to get Flowplayer to play the stream when accessed from the local Ubuntu box pointing to its own Apache web server, by pointing Firefox to http://192.168.0.75/flowplayer/example/index.html (which references the stream at http://192.168.0.75:8080/file.flv).
  4. SEMI BARELY HORRIBLY WORKS - If I attempt to access that same HTML file from a computer on the LAN, the player shows up, has the swirling logo for a moment, and then appears blank with just the text "flowplayer" in the bottom left. No video, no sound, just blank. Mousing over it displays the controls. Oddly, if I leave the browser open for hours, and I mean hours, eventually the video will appear and begin streaming live.

My main questions center around the following concepts: Should I be using VLC's RTP/RTSP/RTMP? If so, how do I set that up? I've tried a billion times and have yet to get something set up locally, let alone remotely. Am I solely restricted to FLV files? All that matters is that Flowplayer can play it in a cross-browser-compliant manner, so might I have better luck with a different container? What exactly is a .ts file/segmenter? Is my only option trying to get something like Red5 working, or buying one of the expensive servers? If so, why does VLC have an RTP option, yet it never works?
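On the .ts/segmenter question: that refers to HTTP Live Streaming (HLS), where the encoder writes short MPEG-TS segments plus an .m3u8 index that players poll for new segments, which sidesteps the "file considered finished" problem. Newer VLC builds can produce this with the livehttp output module; the line below is only a sketch with hypothetical paths, reusing the transcode settings from the question, and you would need to verify your VLC build actually ships livehttp:

:sout=#transcode{vcodec=h264,vb=800,acodec=mp4a,ab=128,channels=2,samplerate=44100}:std{access=livehttp{seglen=10,delsegs=true,numsegs=5,index=/var/www/stream/live.m3u8,index-url=http://192.168.0.75/stream/live-########.ts},mux=ts{use-key-frames},dst=/var/www/stream/live-########.ts}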

Any guidance or suggestions would be greatly appreciated. Here's my original thread on the VLC forums, which unfortunately got crickets.

Fatal error: Incompatible file format: The encoded file has format major ID 1, whereas the Loader expects 4 in ... on line 0

Posted: 21 Aug 2021 10:00 PM PDT

I am using Ubuntu 10.04, and for some time I had to keep a downgraded PHP 5.2 package because I need to run Zend-encrypted scripts. Recently I noticed that Zend released a beta version of their loader (http://forums.zend.com/viewtopic.php?f=57&t=1365&start=80#p22073), so I updated to the native PHP 5.3 package, downloaded the .so file, and added this to php.ini:

;zend_extension=/etc/php5/ZendOptimizer.so
zend_extension=/etc/php5/ZendGuardLoader.so
zend_loader.enable=1
zend_loader.disable_licensing=0
zend_loader.obfuscation_level_support=3

and restarted the server. Now I am getting this error:

Fatal error: Incompatible file format: The encoded file has format major ID 1, whereas the Loader expects 4 in ... on line 0

Do you by chance know an easy fix for this? Or should I downgrade back and wait until they release something more stable?
