Saturday, May 1, 2021

Recent Questions - Server Fault

RAID type for 12x 10TB drives

Posted: 01 May 2021 08:25 PM PDT

What is the best RAID type for a 12-bay Synology NAS with 12x 10TB drives? At the moment it is configured as RAID5. I'm thinking of RAID6, but is RAID6 recommended for such large drives? How big is the risk of a URE? What about rebuild times? The NAS will be used as a backup repository for Veeam.
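For a rough sense of the URE risk, the probability of hitting at least one unrecoverable read error during a rebuild can be estimated from the drive's datasheet URE rate. This is a simplistic independence model, and the URE rates used below are typical datasheet figures, not measured values:

```python
# Back-of-the-envelope URE risk during a RAID rebuild, assuming independent
# bit errors at the drive's specified unrecoverable-read-error (URE) rate.
# The URE rates below are typical datasheet figures, not measured values.

def rebuild_ure_probability(drives_read: int, drive_tb: float, ure_per_bit: float) -> float:
    """Probability of hitting at least one URE while reading
    `drives_read` surviving drives of `drive_tb` terabytes each."""
    bits_read = drives_read * drive_tb * 1e12 * 8  # TB -> bits
    return 1 - (1 - ure_per_bit) ** bits_read

# RAID5 on 12x 10TB: a rebuild must read all 11 surviving drives in full.
p_enterprise = rebuild_ure_probability(11, 10, 1e-15)  # enterprise-class drives
p_consumer = rebuild_ure_probability(11, 10, 1e-14)    # consumer-class drives

print(f"RAID5 rebuild, URE rate 1e-15/bit: ~{p_enterprise:.0%} chance of an error")
print(f"RAID5 rebuild, URE rate 1e-14/bit: ~{p_consumer:.0%} chance of an error")
```

Under this model a single-parity rebuild of an array this size carries a non-trivial URE probability even with enterprise-class drives, which is the usual argument for RAID6 at these capacities: a URE during a RAID6 rebuild is still covered by the second parity.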

Java EE / JSF and Nginx: uploading files fails when server behind Nginx

Posted: 01 May 2021 04:25 PM PDT

Setup: JSF 2.3, Jakarta EE 8, and WildFly 23 / Payara 5.

Uploading a file with <h:input> or <p:fileUpload> works fine, but fails when Nginx is turned on: the file is never received by the backing bean.

  1. Is there any configuration to add to the server (Payara or WildFly)?
  2. Or does the Nginx config file contain errors?

app.conf:

upstream payara {
    least_conn;
    server localhost:8080 max_fails=3 fail_timeout=5s;
    server localhost:8181 max_fails=3 fail_timeout=5s;
}

server {
    if ($host = nocodefunctions.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen        80;
    access_log /var/log/nginx/payara-access.log;
    error_log /var/log/nginx/payara-error.log;

    # Replace with your domain
    server_name   nocodefunctions.com;
    return        301 https://$host$request_uri;
}

server {
    listen        443 ssl;
    server_name   nocodefunctions.com;

    ssl_certificate /etc/letsencrypt/live/xxxxx/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/xxxxx/privkey.pem; # managed by Certbot

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";

    location /nocodeapp-web-front-1.0 {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-Proto https;

        proxy_connect_timeout      240;
        proxy_send_timeout         240;
        proxy_read_timeout         240;

        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_pass http://payara$request_uri;
    }

    location = / {
        proxy_pass http://payara;
        return 301 https://nocodefunctions.com/nocodeapp-web-front-1.0;
    }
}
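One common cause of uploads failing only behind Nginx is the default 1 MB `client_max_body_size` limit: Nginx rejects larger request bodies before they ever reach the upstream. A minimal sketch (the 50M value is an arbitrary example, not a recommendation):

```nginx
# inside the server or location block that proxies to Payara
client_max_body_size 50M;   # default is 1m; uploads above the limit get HTTP 413
```

Whether this is the actual cause here can be checked in the Nginx error log, which records "client intended to send too large body" when the limit is hit.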

ERROR 1064 (42000) at line 27 when importing a MySQL database

Posted: 01 May 2021 04:16 PM PDT

When I run mysql -p -u db db < db322021123.sql I see: ERROR 1064 (42000) at line 27: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'IF; SET inc_minue=0; END IF; ' at line 27

This is line 27, but I cannot find the error. Please help:

CREATE DEFINER=db322021123@localhost FUNCTION system_hours_diff (start_date INT, end_date INT, include_hours VARCHAR(255), exclude_days VARCHAR(64), exclude_holidays TINYINT(1)) RETURN$

Ubuntu 18.04
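An ERROR 1064 near an `END IF` when importing a dump that defines stored functions is often a delimiter problem: the client splits the function body at the first internal `;` instead of reading the whole definition. A hedged sketch of the usual fix (the function below is illustrative, not the one from the dump):

```sql
DELIMITER $$

CREATE FUNCTION demo_diff(start_date INT, end_date INT)
RETURNS INT
DETERMINISTIC
BEGIN
    -- internal semicolons no longer terminate the statement
    RETURN end_date - start_date;
END$$

DELIMITER ;
```

If the dump lacks the `DELIMITER` statements around its `CREATE FUNCTION` blocks (some export tools strip them), adding them back typically resolves this class of error.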

NGINX URL redirect matching a specific rule

Posted: 01 May 2021 03:03 PM PDT

I am not new to nginx, but unfortunately I have never understood the redirect/rewrite rules well. I have a dilemma and have already tried what I know, with no success.

What I want is a simple URL rewrite/redirect: when I type https://example.com/chart.php?id=1234 in the browser bar, the URL should automatically transform into https://example.com/chart/1234, and of course show the same content as the original.

I have already tried many variations, for example:

location /chart/ {
    rewrite ^chart/([0-9]+)/?$ chart.php?id=$1 break;
    proxy_pass  _to_apache;
}
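For what it's worth, a sketch of one way to get both directions (assuming the upstream is named `_to_apache` as in the attempt; note the rewrite regex needs a leading `/`, and the redirect from the `?id=` form is a separate rule):

```nginx
# redirect the old form to the pretty URL (301 only when id is numeric)
location = /chart.php {
    if ($arg_id ~ ^[0-9]+$) {
        return 301 /chart/$arg_id;
    }
    proxy_pass http://_to_apache;
}

# serve the pretty URL by internally rewriting back to chart.php
location ~ ^/chart/([0-9]+)/?$ {
    rewrite ^/chart/([0-9]+)/?$ /chart.php?id=$1 break;
    proxy_pass http://_to_apache;
}
```

The internal `rewrite ... break` is not visible to the browser, so no redirect loop occurs: the client sees only /chart/1234 while Apache still receives /chart.php?id=1234.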

Many thanks in advance!

Does installing an SSL cert have anything to do with DNS?

Posted: 01 May 2021 02:39 PM PDT

I have installed SSL certs on IIS servers several times and, as far as I remember, the wait time between installing it and seeing that the site was secured from a web browser was almost instantaneous. I don't recall ever having to wait. It's possible my memory isn't as good as I think though. I've been wrong before, and I suppose it could happen again. :)

So that's why I'm asking this question. I am managing someone else's GoDaddy account and a support person allegedly attached an SSL cert. However it has been over 24 hours and there is no change in the website as it still shows the expired cert. Another support person now tells me it can take 48 hours because of DNS changes needing to propagate. That doesn't make sense to me though... the only DNS related things I think would be redirecting http to https or redirecting www to non www (or vice versa), right? Those things were already handled. Why would there need to be a wait time now?

Am I mistaken here, or am I talking to support people who don't know anything?
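Your recollection matches how this normally works: DNS propagation only matters if a DNS record actually changed, while installing a certificate on the existing server takes effect as soon as the web server reloads it. One way to check which certificate a given server is actually presenting, independent of any browser caching (the hostname below is a placeholder):

```
# Ask the server directly which certificate it presents (with SNI)
echo | openssl s_client -connect www.example.com:443 -servername www.example.com 2>/dev/null \
  | openssl x509 -noout -subject -dates
```

If this still shows the expired certificate's dates, the new certificate simply has not been installed or bound to the site, and no amount of DNS waiting will change that.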

Web server for hosting/distributing debian/centos/arch etc packages

Posted: 01 May 2021 02:14 PM PDT

I have a package/software that was hosted on bintray up until recently when they decided to go dark. I don't want to repeat this experience and I want to host these packages myself. I need a web server or tutorial that I can follow to do this file hosting on my own which has the advantage of using my own domain name amongst others. I'm open to something like a service similar to github/gitlab/old bintray but with decent pricing and good features.

What I have found so far is only half-baked (rarely working for all distros), requires too much money, is free but without custom domains, or is way too complicated to set up.

Any ideas?

Kubernetes route traffic from LoadBalancer to Ingress with correct hostname

Posted: 01 May 2021 01:22 PM PDT

I would like to route all traffic for the domain example.com to a Kubernetes LoadBalancer. The LoadBalancer should route the traffic to the specific Ingress where the matching hostname is defined. For example, traffic for the subdomain a.example.com should be routed through the LoadBalancer to the Ingress whose hostname is set to "a.example.com". Is there any solution?

I set the domain on the DNS server to the IP of the LoadBalancer.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx-app
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx-app
    spec:
      containers:
      - name: nginx
        image: nginx:1.13.12
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx
  name: nginx-service
  namespace: default
spec:
  type: NodePort
  ports:
    - port: 80
  selector:
    app: nginx-app
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  labels:
    app: nginx
spec:
  rules:
  - host: "a.example.com"
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: nginx-service
            port:
              number: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-load-balancer
  labels:
    app: nginx
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
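A hedged observation: an Ingress object only does something if an ingress controller is running in the cluster, and the LoadBalancer Service then has to select the controller's pods rather than the app pods, so that hostname-based routing happens in the controller. A sketch assuming the ingress-nginx controller (the names and labels below are that controller's conventions, not values from the manifests in the question):

```yaml
# Hypothetical LoadBalancer Service targeting an ingress-nginx controller
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx   # matches the controller pods
  ports:
  - name: http
    port: 80
    targetPort: 80
```

With this in place, DNS for *.example.com points at the LoadBalancer IP, the controller receives all traffic, and it dispatches by Host header to whichever Ingress declares that hostname.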

Changing privileges without proper permissions on Raspberry OS with ansible ... Vulnerability?

Posted: 01 May 2021 03:31 PM PDT

While setting up a Raspberry Pi with Ansible I made a mistake that I was able to fix, but I think the fix shouldn't have worked. This is what I did:

  • Flashed the latest Raspberry OS image to SD-card
  • booted the raspberry pi 4 with this SD
  • ran an Ansible playbook with these tasks:
    • create a group 'pwdless'
    • create two new users (my own name and a system user for ansible)
    • create authorized_keys files and insert the public keys into these files
    • add these users to the group 'pwdless'
    • remove the pi user

Now this last step failed, since I ran the playbook as the pi user. So I rebooted the Raspberry Pi and ran the playbook again, this time as the newly created ansible user. When I logged in as myself and tried to sudo, I realized I had forgotten to edit the sudoers file to allow members of the pwdless group to become root without a password. At this point I thought I would have to start over, since I had deleted the pi user and neither I nor the ansible user had been added to any groups besides their own and pwdless (which was, at this point, just an ordinary group). I tried to run the playbook again with an extra step to modify the sudoers file, which, to my surprise, worked just fine; afterwards I could successfully run e.g. 'sudo apt install byobu'.

I then realized the ansible user shouldn't have been able to delete the pi user to begin with.

Am I missing something here, or is this a vulnerability? And if so, what should I do to report it?

Emails don't work with Plesk on google cloud compute engine [closed]

Posted: 01 May 2021 02:55 PM PDT

I have a new Compute Engine instance on Google Cloud. Everything is working, but no emails are received, and all my sent emails stay in Plesk's queue. I have set up firewall rules for port 2525 and have also tried 25 and 587.

In my WooCommerce shop, no emails are sent to clients after an order, and on my local iMac I cannot send or receive any emails for the domain I host on that Plesk webhost on Compute Engine.

I use Plesk with CentOS on the compute engine. Postfix is the mail server. All settings for email are done on Plesk.

What else do I have to do?

Unable to create selinux policy to allow drbdadm to run

Posted: 01 May 2021 09:38 PM PDT

In snmpd.conf I have

exec drbd_cstate /sbin/drbdadm cstate all
exec drbd_role /sbin/drbdadm role all
exec drbd_state /sbin/drbdadm dstate all

With SELinux set to permissive, if I run the SNMP walk command (/usr/bin/snmpwalk -v 2c -c PUBLIC 192.168.1.10 'NET-SNMP-EXTEND-MIB::nsExtendOutLine."drbd_cstate"'.1), I get this in the log:

type=AVC msg=audit(1619795855.717:214829): avc:  denied  { read } for  pid=30859 comm="drbdadm" name="node_id" dev="dm-0" ino=2360185 scontext=system_u:system_r:snmpd_t:s0 tcontext=unconfined_u:object_r:drbd_var_lib_t:s0 tclass=file permissive=1
type=AVC msg=audit(1619795855.717:214829): avc:  denied  { open } for  pid=30859 comm="drbdadm" path="/var/lib/drbd/node_id" dev="dm-0" ino=2360185 scontext=system_u:system_r:snmpd_t:s0 tcontext=unconfined_u:object_r:drbd_var_lib_t:s0 tclass=file permissive=1
type=SYSCALL msg=audit(1619795855.717:214829): arch=c000003e syscall=2 success=yes exit=4 a0=42eee0 a1=0 a2=1 a3=7fff53710560 items=0 ppid=27329 pid=30859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="drbdadm" exe="/usr/sbin/drbdadm" subj=system_u:system_r:snmpd_t:s0 key=(null)
type=PROCTITLE msg=audit(1619795855.717:214829): proctitle=2F7362696E2F6472626461646D0063737461746500616C6C
type=AVC msg=audit(1619795855.719:214830): avc:  denied  { create } for  pid=30860 comm="drbdsetup" scontext=system_u:system_r:snmpd_t:s0 tcontext=system_u:system_r:snmpd_t:s0 tclass=netlink_socket permissive=1
type=SYSCALL msg=audit(1619795855.719:214830): arch=c000003e syscall=41 success=yes exit=4 a0=10 a1=2 a2=10 a3=7ffe12bd3460 items=0 ppid=30859 pid=30860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="drbdsetup" exe="/usr/sbin/drbdsetup" subj=system_u:system_r:snmpd_t:s0 key=(null)
type=PROCTITLE msg=audit(1619795855.719:214830): proctitle=2F7362696E2F647262647365747570006373746174650072300031
type=AVC msg=audit(1619795855.720:214831): avc:  denied  { setopt } for  pid=30860 comm="drbdsetup" scontext=system_u:system_r:snmpd_t:s0 tcontext=system_u:system_r:snmpd_t:s0 tclass=netlink_socket permissive=1
type=SYSCALL msg=audit(1619795855.720:214831): arch=c000003e syscall=54 success=yes exit=0 a0=4 a1=1 a2=7 a3=7ffe12bd3a3c items=0 ppid=30859 pid=30860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="drbdsetup" exe="/usr/sbin/drbdsetup" subj=system_u:system_r:snmpd_t:s0 key=(null)
type=PROCTITLE msg=audit(1619795855.720:214831): proctitle=2F7362696E2F647262647365747570006373746174650072300031
type=AVC msg=audit(1619795855.720:214832): avc:  denied  { bind } for  pid=30860 comm="drbdsetup" scontext=system_u:system_r:snmpd_t:s0 tcontext=system_u:system_r:snmpd_t:s0 tclass=netlink_socket permissive=1
type=SYSCALL msg=audit(1619795855.720:214832): arch=c000003e syscall=49 success=yes exit=0 a0=4 a1=21dd030 a2=c a3=7ffe12bd3460 items=0 ppid=30859 pid=30860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="drbdsetup" exe="/usr/sbin/drbdsetup" subj=system_u:system_r:snmpd_t:s0 key=(null)
type=PROCTITLE msg=audit(1619795855.720:214832): proctitle=2F7362696E2F647262647365747570006373746174650072300031
type=AVC msg=audit(1619795855.720:214833): avc:  denied  { getattr } for  pid=30860 comm="drbdsetup" scontext=system_u:system_r:snmpd_t:s0 tcontext=system_u:system_r:snmpd_t:s0 tclass=netlink_socket permissive=1
type=SYSCALL msg=audit(1619795855.720:214833): arch=c000003e syscall=51 success=yes exit=0 a0=4 a1=21dd030 a2=7ffe12bd3a38 a3=7ffe12bd3460 items=0 ppid=30859 pid=30860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="drbdsetup" exe="/usr/sbin/drbdsetup" subj=system_u:system_r:snmpd_t:s0 key=(null)
type=PROCTITLE msg=audit(1619795855.720:214833): proctitle=2F7362696E2F647262647365747570006373746174650072300031

When doing the snmpwalk, the error I got back was: NET-SNMP-EXTEND-MIB::nsExtendOutLine."drbd_cstate".1 = STRING: Creation of /var/lib/drbd/node_id failed: Permission denied

I used audit2allow to help create an SELinux policy that would allow me to run this command. The policy it gave me was:

module drbd_cstate 1.0;

require {
        type drbd_var_lib_t;
        type snmpd_t;
        class netlink_socket { bind create getattr setopt };
        class file { open read };
}

#============= snmpd_t ==============
allow snmpd_t drbd_var_lib_t:file { open read };
allow snmpd_t self:netlink_socket { bind create getattr setopt };

Once I added my newly created module and ran snmpwalk, I got back:

NET-SNMP-EXTEND-MIB::nsExtendOutLine."drbd_cstate".1 = STRING: <1>failed to send netlink message  

Running tail -f /var/log/audit/audit.log turns up nothing. If I run tcpdump while doing the snmpwalk, I see Could not connect to 'drbd' generic netlink family go over the network in one packet, and then <1>failed to send netlink message. If I then switch SELinux back to permissive (setenforce 0), everything magically works again. What am I doing wrong?
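A hedged guess, based on the symptom that audit.log stays silent while the operation still fails: SELinux `dontaudit` rules can suppress further AVC messages, so the remaining denials never reach the log. One iterative approach is to disable dontaudit rules, reproduce the failure, and regenerate the policy from the full set of denials:

```
# Temporarily disable dontaudit rules so suppressed denials get logged
semodule -DB

# ...re-run the failing snmpwalk here, then rebuild the policy from ALL denials
grep -E 'comm="drbd(adm|setup)"' /var/log/audit/audit.log | audit2allow -M drbd_cstate
semodule -i drbd_cstate.pp

# Restore dontaudit rules when done
semodule -B
```

The grep pattern above is an illustration; the general point is that audit2allow can only generate rules for denials that were actually logged, so each enforcing-mode retry may surface additional denials to fold in.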

scp and sftp failing with "client_loop: send disconnect: Broken pipe" (MacOS 11.3 issue?)

Posted: 01 May 2021 09:00 PM PDT

As of 2 days ago, my attempts to scp files from my laptop to servers consistently fail for files larger than ~200 KB with the error "client_loop: send disconnect: Broken pipe". This coincided with an upgrade of my laptop to macOS 11.3 (from 11.whatever-it-was-before).

$ dd if=/dev/urandom of=test.dat count=400 2> /dev/null && ls -l test.dat && scp test.dat $DST
-rw-r--r--  1 xxxx  staff  204800 Apr 28 11:27 test.dat
test.dat                                        0%    0     0.0KB/s   --:-- ETAclient_loop: send disconnect: Broken pipe
lost connection

This is definitely new, as I use scp on an almost-daily basis and never had an issue before the update. The behavior is visible against 2 different server architectures I've used as a destination (NAS and Raspberry Pi, to rule out a coincidental server misconfiguration); using my Linux desktop as a client, however, shows no issues. What is also weird is that the problem occurs with both the stock SSH and a Homebrew SSH install, which hints at either an SSH client configuration issue or a bug in the networking stack. I'm curious whether anyone else is observing the same issue. sftp exhibits the same problem.

On the server side, here's what I get in the log:

May  1 23:27:27 myhost sshd[21774]: Bad packet length 116136902.
May  1 23:27:27 myhost sshd[21774]: ssh_dispatch_run_fatal: Connection from user pi XXX.XXX.XXX.XXX port 59948: Connection corrupted

P.S. One [very imperfect] way to work around this issue is to use a sufficiently low bandwidth limit (-l option) but it is not great as it makes transfers glacially slow.
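For completeness, one mitigation some users report for broken-pipe/corruption errors on problematic network paths is pinning the IP QoS class in the client config. This is a speculative workaround to test, not a confirmed fix for the 11.3 issue:

```
# ~/.ssh/config -- speculative mitigation, not a confirmed macOS 11.3 fix
Host *
    IPQoS throughput
```

`IPQoS` is a standard OpenSSH client option; changing it alters how packets are marked and has been known to route around middleboxes that mishandle certain QoS classes.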

SSH Tunnel - channel 3: open failed: administratively prohibited: open failed

Posted: 01 May 2021 08:04 PM PDT

ssh -D 9090 user@host  

when I try to request web pages through socks5 (127.0.0.1:9090) I get:

channel 3: open failed: administratively prohibited: open failed  

server sshd_config:

#MaxStartups 10:30:100
PermitTunnel yes
#ChrootDirectory none
#VersionAddendum none

# no default banner path
#Banner none

# Accept locale-related environment variables
AcceptEnv LANG LC_CTYPE LC_NUMERIC LC_TIME LC_COLLATE LC_MONETARY LC_MESSAGES
AcceptEnv LC_PAPER LC_NAME LC_ADDRESS LC_TELEPHONE LC_MEASUREMENT
AcceptEnv LC_IDENTIFICATION LC_ALL LANGUAGE
AcceptEnv XMODIFIERS

# override default of no subsystems
Subsystem       sftp    /usr/libexec/openssh/sftp-server

# Example of overriding settings on a per-user basis
#Match User anoncvs
#       X11Forwarding no
        AllowTcpForwarding yes
        PermitTTY yes
#       ForceCommand cvs server
#       PermitTunnel yes
        GatewayPorts       yes

What am I missing?
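One thing worth checking: "administratively prohibited" is the error sshd returns when TCP forwarding is disallowed or restricted. In the config above, `AllowTcpForwarding` and `GatewayPorts` sit among the remnants of a commented-out `Match` block; stating the forwarding policy explicitly at top level removes any ambiguity about what actually applies (a sketch, followed by an sshd restart):

```
# sshd_config -- state forwarding policy explicitly at the global level
AllowTcpForwarding yes
PermitOpen any

# then restart the daemon, e.g.: systemctl restart sshd
```

`PermitOpen any` is the default, but making it explicit rules out a stray `PermitOpen` restriction elsewhere in the file as the cause of the prohibited channels.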

Cannot console into VMs in Hyper-V

Posted: 01 May 2021 02:26 PM PDT

When I attempt to view the console of a VM in Hyper-V, I get the following messages:

Video remoting was disconnected

and

Could not connect to the virtual machine. Try to connect again. If the problem persists, contact your system administrator. Would you like to try connecting again?

I have seen many "fixes" and tried all of them. However, after doing a lot of digging, I did finally find a way to turn on logging and I am seeing errors. Does anyone recognize these?

Things I have tried:

  • Rebooting
  • Uninstalling Hyper-V and reinstalling
  • Doing a Refresh on my Windows 10 machine (the Hyper-V server) (this made it work for a day; the next day it was doing the same thing)
  • Verified IPv6 was enabled. Then found it was disabled via GPO, so I moved it to an OU that did not have GPO applied, created a GPO that turns it on, gpupdate /force and reboot
  • Disabled all Firewalls/Antivirus
  • Opened vmconnect.exe as admin manually
  • Compared GPOs/Security settings with my home instance of Hyper-V that works
  • Turned off Enhanced session Mode
  • Turned off RemoteFX
  • Deleted VM and recreated
  • Tried just a VM without any ISO, or OS, or anything bootable other than BIOS
  • Disconnected VM from vSwitch

I apologize ahead of time for all the logs. Please let me know if you need any more info and thanks for helping me!

Logs

VMConnect_Trace_20180511125822

Tracing Hyper-V Client version: 10.0.0.0. Rough build date (virtman file written): 04/29/2018 03:56:46

2018-05-11 12:58:23.435 [01] USER_ACTION_INITIATED VmConnect RdpViewerControl:ConnectCallback() Connecting with server full name: ET1SYS23 to RDP port 2179
2018-05-11 12:58:35.035 [01] USER_ACTION_INITIATED VmConnect RdpViewerControl:ConnectCallback() Connecting with server full name: ET1SYS23 to RDP port 2179
2018-05-11 12:58:37.211 [01] USER_ACTION_INITIATED VmConnect RdpViewerControl:ConnectCallback() Connecting with server full name: ET1SYS23 to RDP port 2179
2018-05-11 01:00:26.173 [01] USER_ACTION_INITIATED VmConnect RdpViewerControl:ConnectCallback() Connecting with server full name: ET1SYS23 to RDP port 2179
2018-05-11 01:09:33.589 [01] USER_ACTION_INITIATED VmConnect RdpViewerControl:ConnectCallback() Connecting with server full name: ET1SYS23 to RDP port 2179
2018-05-11 01:10:01.123 [01] ERROR VmConnect RdpViewerControl:Deactivate() QueryInterface for IOleInPlaceObject on the Rdp Control failed.

Event Log (Microsoft-Windows-Hyper-V-VMMS-Admin)

The required GPU resources could not be accessed. This server cannot run as a RemoteFX host without a GPU. Verify that the GPU is correctly installed.

Cannot load a checkpoint configuration: The process cannot access the file because it is being used by another process. (0x80070020). (Checkpoint ID FD216B1C-2BB2-48A2-966C-C97D2853094D)

Cannot load a checkpoint configuration: The process cannot access the file because it is being used by another process. (0x80070020). (Checkpoint ID FD216B1C-2BB2-48A2-966C-C97D2853094D)

The required GPU resources could not be accessed. This server cannot run as a RemoteFX host without a GPU. Verify that the GPU is correctly installed.

Event Log (Microsoft-Windows-Hyper-V-VMMS-Analytics)

vm\service\resmgr\video\synth3dvideopoolrepository.cpp(884)\vmms.exe!00007FF639018178: (caller: 00007FF63900CD0F) Exception(1) tid(2728) 80004005 Unspecified error

[Synth3dPhysicalGPUManager::InitGpuStates()@1356] Caught exception: Unspecified error (0x80004005)

vm\service\fr\frctutilities.cpp(2223)\vmms.exe!00007FF63910DF69: (caller: 00007FF6394A6B0E) Exception(2) tid(ebc) 80070002 The system cannot find the file specified.

[FrCtUtilities::UpdateFRCTFilesTime()@2226] Caught exception: The system cannot find the file specified. (0x80070002)

Event Log (Microsoft-Windows-Hyper-V-Worker-Admin)

The virtual machine Ubuntu 18.04 cannot load device Microsoft Synthetic Display Controller because there are no mutually supported protocol versions. The server version is 3.5 and the client version is 3.2 (Virtual machine ID B4714427-9B5E-4CD1-AE7D-5020D643EC55).

'Ubuntu 18.04' started successfully. (Virtual machine ID B4714427-9B5E-4CD1-AE7D-5020D643EC55)

Event Log (Microsoft-Windows-Hyper-V-Worker-Analytics)

[Virtual machine  - ] [PIC ] Using unhandled command 3

[Virtual machine B4714427-9B5E-4CD1-AE7D-5020D643EC55] onecore\vm\ic\framework\icendpoint.cpp(1279)\vmiccore.dll!00007FF871CBCC3F: (caller: 00007FF871CBCEE8) LogHr(1) tid(24d4) 8007000D The data is invalid.
    Msg:[Truncated or partial message header]

[Virtual machine B4714427-9B5E-4CD1-AE7D-5020D643EC55] onecore\vm\ic\framework\icendpoint.cpp(1288)\vmiccore.dll!00007FF871CBCCE5: (caller: 00007FF871CBCEE8) LogHr(2) tid(24d4) 8007000D The data is invalid.
    Msg:[Processing failed with unprocessed portions; bytesRemaining = 8]

[Virtual machine B4714427-9B5E-4CD1-AE7D-5020D643EC55] onecore\vm\ic\framework\icendpoint.cpp(1288)\vmiccore.dll!00007FF871CBCCE5: (caller: 00007FF871CBCEE8) LogHr(4) tid(2520) 8007000D The data is invalid.
    Msg:[Processing failed with unprocessed portions; bytesRemaining = 12]

[Virtual machine B4714427-9B5E-4CD1-AE7D-5020D643EC55] onecore\vm\ic\framework\icendpoint.cpp(1279)\vmiccore.dll!00007FF871CBCC3F: (caller: 00007FF871CBCEE8) LogHr(5) tid(2454) 8007000D The data is invalid.
    Msg:[Truncated or partial message header]

[Virtual machine B4714427-9B5E-4CD1-AE7D-5020D643EC55] onecore\vm\ic\framework\icendpoint.cpp(1288)\vmiccore.dll!00007FF871CBCCE5: (caller: 00007FF871CBCEE8) LogHr(6) tid(2454) 8007000D The data is invalid.
    Msg:[Processing failed with unprocessed portions; bytesRemaining = 12]

[Virtual machine B4714427-9B5E-4CD1-AE7D-5020D643EC55] Unable to find a connection in the connection map.

memory cache is too high and going to use swap

Posted: 01 May 2021 01:08 PM PDT

I have a CentOS server with 32 GB RAM; its state is (free -m):

             total       used       free     shared    buffers     cached
Mem:         32071      31488        583          0        244      19329
-/+ buffers/cache:      11914      20157
Swap:        17399        287      17112

The cached size keeps growing (between every app restart and cache clear).

Five hours after posting my question, the memory status is:

             total       used       free     shared    buffers     cached
Mem:         32071      31850        221          0        194      20124
-/+ buffers/cache:      11530      20541
Swap:        17399        299      17100

My Java options are:

-Xms12g -Xmx12g -XX:MaxNewSize=6g -XX:NewSize=6g -XX:+UseParallelOldGC -XX:+UseParallelGC -XX:+UseTLAB -XX:MaxTenuringThreshold=15 -XX:+DisableExplicitGC  

As you can see, the cache size is very high, and at high-load times the swap gets used and the server becomes very slow (unlike the scenario at https://www.linuxatemyram.com/, the memory is genuinely full, swap is used, and my app is too slow). I use Java for the service.

What can I do?
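If the slowdown comes from the kernel swapping the JVM heap out in favor of page cache, lowering vm.swappiness is one commonly used knob; it tells the kernel to prefer dropping cache over swapping anonymous pages. A sketch (10 is an arbitrary example value; the default is 60):

```
# Prefer reclaiming page cache over swapping anonymous (JVM heap) pages
sysctl vm.swappiness=10

# Persist the setting across reboots
echo 'vm.swappiness = 10' >> /etc/sysctl.conf
```

This does not shrink the cache (cache is reclaimable on demand), but it reduces the chance that hot JVM pages get pushed to swap while the cache grows.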

Temporarily disable aws CodePipeline?

Posted: 01 May 2021 09:36 PM PDT

I set up an AWS CodePipeline which sources from GitHub, builds with CodeBuild, and then deploys to an EC2 instance via CodeDeploy. This EC2 instance is the development environment.

Since my team decided we won't be using this server/code for a while, we stopped the EC2 instance. So I'd like to halt the CodePipeline temporarily for now (CodeBuild and CodeDeploy are not free, even if the price is small...). However, I cannot find an option for temporarily disabling a pipeline.

Question:

  • Can I disable CodePipeline temporarily?
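CodePipeline can be paused without deleting it by disabling the inbound transition into a stage, so later stages (and their CodeBuild/CodeDeploy charges) stop running. A sketch using the AWS CLI; the pipeline and stage names below are placeholders:

```
aws codepipeline disable-stage-transition \
    --pipeline-name my-pipeline \
    --stage-name Build \
    --transition-type Inbound \
    --reason "Environment paused"

# re-enable later with:
aws codepipeline enable-stage-transition \
    --pipeline-name my-pipeline \
    --stage-name Build \
    --transition-type Inbound
```

Note the source stage still detects commits (executions queue up at the disabled transition), so for a complete stop the source trigger/webhook would also need to be removed.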

Command works on terminal but doesn't work with Ansible module

Posted: 01 May 2021 07:01 PM PDT

When I run the command alertmanager -config.file=/etc/alertmanager/alertmanager.yml on the terminal, it runs successfully. But when I run the following task against the host

- name: run alertmanager
  become: yes
  command: alertmanager -config.file=/etc/alertmanager/alertmanager.yml
  tags: alertmanager

it fails with the following error

fatal: [172.30.1.50]: FAILED! => {"changed": false, "cmd": "alertmanager -config.file=/etc/alertmanager/alertmanager.yml", "failed": true, "msg": "[Errno 2] No such file or directory", "rc": 2}  

I have alertmanager in my PATH variable and everything seems fine. Am I missing something about the command module?
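A likely explanation: Ansible's command module runs without a login shell, so PATH additions from .bashrc or .profile are not picked up, and "[Errno 2] No such file or directory" means the executable itself wasn't found. Two sketches of the usual fixes (the /usr/local/bin path is an assumption; substitute the real location from `which alertmanager`):

```yaml
# Option 1: use the absolute path to the binary
- name: run alertmanager
  become: yes
  command: /usr/local/bin/alertmanager -config.file=/etc/alertmanager/alertmanager.yml
  tags: alertmanager

# Option 2: keep the bare name but extend PATH for this task
# (ansible_env.PATH requires fact gathering to be enabled)
- name: run alertmanager
  become: yes
  command: alertmanager -config.file=/etc/alertmanager/alertmanager.yml
  environment:
    PATH: "/usr/local/bin:{{ ansible_env.PATH }}"
  tags: alertmanager
```

Also note that `become: yes` switches to root, whose PATH may differ from the user you tested with in the terminal.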

PHP Warning: Missing boundary in multipart/form-data POST data in Unknown on line 0

Posted: 01 May 2021 03:11 PM PDT

I'm logging every error on my server into a file, and from time to time I work through and fix their causes.

I have several log entries with this PHP warning:

[11-Jun-2017 10:49:27 Europe/Berlin] PHP Warning:  Missing boundary in multipart/form-data POST data in Unknown on line 0
[12-Jun-2017 08:58:27 Europe/Berlin] PHP Warning:  Missing boundary in multipart/form-data POST data in Unknown on line 0
[13-Jun-2017 05:57:19 Europe/Berlin] PHP Warning:  Missing boundary in multipart/form-data POST data in Unknown on line 0
[13-Jun-2017 05:58:01 Europe/Berlin] PHP Warning:  Missing boundary in multipart/form-data POST data in Unknown on line 0
[14-Jun-2017 05:42:27 Europe/Berlin] PHP Warning:  Missing boundary in multipart/form-data POST data in Unknown on line 0

The problem is that I have no clue what triggers this. So my question is: how and where should I adjust my logging to store the request headers alongside this warning? Any other ideas on how to see what produces it?

The server is running: Ubuntu 16.04.2, Apache 2.4.18, PHP 7.0.18.
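Since the warning means a multipart/form-data request arrived without a boundary parameter in its Content-Type header, one way to catch the culprit is to have Apache log that header for every request. A sketch (the log file name and format nickname are arbitrary):

```
# Log client IP, time, request line, status, and the raw Content-Type header
LogFormat "%h %t \"%r\" %>s \"%{Content-Type}i\" \"%{User-Agent}i\"" ctdebug
CustomLog /var/log/apache2/content-type.log ctdebug
```

Correlating the timestamps of the PHP warnings with this log should reveal which requests send `multipart/form-data` with a missing or malformed boundary, often a broken bot or scanner.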

Getting "Can't create/write to file '/var/lib/mysql/is_writable'" using docker (inside vagrant on OS X)

Posted: 01 May 2021 05:03 PM PDT

I am trying to use docker-compose/docker inside a vagrant machine hosted on OS X. Running 'docker-compose up' always fails with

mysqld: Can't create/write to file '/var/lib/mysql/is_writable' (Errcode: 13 - Permission denied)

I can manually create the file just fine, however. (Using touch and sudo -g vagrant touch)

Does anyone know where to look to debug this?


Log:

db_1  | Initializing database
db_1  | mysqld: Can't create/write to file '/var/lib/mysql/is_writable' (Errcode: 13 - Permission denied)
db_1  | 2016-05-21T22:55:38.877522Z 0 [ERROR] --initialize specified but the data directory exists and is not writable. Aborting.
db_1  | 2016-05-21T22:55:38.877799Z 0 [ERROR] Aborting

My docker-compose.yaml:

version: '2'
services:
  db:
    privileged: true
    image: mysql
    volumes:
      - "./.data/db:/var/lib/mysql"
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: wordpress
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress
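One common workaround when the data directory lives on a Vagrant synced folder: the mysqld user inside the container cannot write to it because of the synced-folder (vboxsf) permission semantics. Switching to a Docker-managed named volume sidesteps that entirely, at the cost of the data no longer living under ./.data on the host (a sketch):

```yaml
version: '2'
services:
  db:
    image: mysql
    volumes:
      - db_data:/var/lib/mysql   # named volume instead of a host bind mount
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: wordpress

volumes:
  db_data:
```

Being able to `touch` a file as the vagrant user doesn't prove much here: mysqld in the container runs as a different UID, and synced folders map ownership in ways that often deny writes to other UIDs.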

My Vagrantfile:

# -*- mode: ruby -*-
# vi: set ft=ruby :

# All Vagrant configuration is done below. The "2" in Vagrant.configure
# configures the configuration version (we support older styles for
# backwards compatibility). Please don't change it unless you know what
# you're doing.
Vagrant.configure(2) do |config|
  # The most common configuration options are documented and commented below.
  # For a complete reference, please see the online documentation at
  # https://docs.vagrantup.com.

  # Every Vagrant development environment requires a box. You can search for
  # boxes at https://atlas.hashicorp.com/search.
  config.vm.box = "ubuntu/trusty64"
  # config.vm.box = "debian/jessie64"

  # Disable automatic box update checking. If you disable this, then
  # boxes will only be checked for updates when the user runs
  # `vagrant box outdated`. This is not recommended.
  # config.vm.box_check_update = false

  # Create a forwarded port mapping which allows access to a specific port
  # within the machine from a port on the host machine. In the example below,
  # accessing "localhost:8080" will access port 80 on the guest machine.
  # config.vm.network "forwarded_port", guest: 80, host: 8080

  # Create a private network, which allows host-only access to the machine
  # using a specific IP.
  # config.vm.network "private_network", ip: "192.168.33.10"

  # Create a public network, which generally matched to bridged network.
  # Bridged networks make the machine appear as another physical device on
  # your network.
  # config.vm.network "public_network"

  # Share an additional folder to the guest VM. The first argument is
  # the path on the host to the actual folder. The second argument is
  # the path on the guest to mount the folder. And the optional third
  # argument is a set of non-required options.
  # config.vm.synced_folder "../data", "/vagrant_data"

  # Provider-specific configuration so you can fine-tune various
  # backing providers for Vagrant. These expose provider-specific options.
  # Example for VirtualBox:
  #
  # config.vm.provider "virtualbox" do |vb|
  #   # Display the VirtualBox GUI when booting the machine
  #   vb.gui = true
  #
  #   # Customize the amount of memory on the VM:
  #   vb.memory = "1024"
  # end
  #
  # View the documentation for the provider you are using for more
  # information on available options.

  # Define a Vagrant Push strategy for pushing to Atlas. Other push strategies
  # such as FTP and Heroku are also available. See the documentation at
  # https://docs.vagrantup.com/v2/push/atlas.html for more information.
  # config.push.define "atlas" do |push|
  #   push.app = "YOUR_ATLAS_USERNAME/YOUR_APPLICATION_NAME"
  # end

  # Enable provisioning with a shell script. Additional provisioners such as
  # Puppet, Chef, Ansible, Salt, and Docker are also available. Please see the
  # documentation for more information about their specific syntax and use.
  # config.vm.provision "shell", inline: <<-SHELL
  #   sudo apt-get update
  #   sudo apt-get install -y apache2
  # SHELL

  #####################################################################
  # Custom Configuration

  config.vm.define "dev" do |dev|

    # if File.directory?("~/Dev")
    #   dev.vm.synced_folder "~/Dev", "/vagrant/Dev"
    # end
    # custom: above does not work for symlinks
    dev.vm.synced_folder "~/Dev", "/home/vagrant/Dev"
#    dev.vm.synced_folder "~/Dev/docker", "/docker"

    dev.vm.provider "virtualbox" do |vb|
      vb.gui = false
      vb.memory = "2048"
    end

    dev.vm.provision "shell",
                     run: "always",
                     inline: <<-SHELL
      pushd /vagrant/conf
      chmod 755 setup.sh && ./setup.sh
      popd
    SHELL

    dev.ssh.forward_x11 = true

    # Install the caching plugin if you want to take advantage of the cache
    # $ vagrant plugin install vagrant-cachier
    if Vagrant.has_plugin?("vagrant-cachier")
      # Configure cached packages to be shared between instances of the same base box.
      # More info on http://fgrehm.viewdocs.io/vagrant-cachier/usage
      config.cache.scope = :machine
    end

  end

end

kinit pre-authentication fails

Posted: 01 May 2021 09:39 PM PDT

I have a CentOS 6.4 that someone set up a while back.

I'm not sure exactly how it was installed, but it works very well with Kerberos. I used authconfig to set the domain and the Kerberos settings.

I used ktpass on a Windows domain controller and transferred the keytab. kinit -k works fine and I can use it for NFSv4 Kerberos mounts.

This is all pretty standard.

My problem is I have a customer that installed 6.7 with a base install and we cannot get kinit to work correctly.

We installed these RPMs:

$ yum install krb5-libs krb5-workstation pam_krb5 \
      cyrus-sasl-gssapi samba-* nfs-utils nfs4-acl-tools tcpdump -y

Every attempt to get the system to pick up a TGT returns the generic error:

$ kinit -k nfs/oldlabsystem
kinit: Preauthentication failed while getting initial credentials

I went back and installed 6.4 the same way, and now that 6.4 install has the problem too. I pulled a list of the RPMs from my working 6.4 and used yum to install the same set.

No luck here.

A network trace shows:

AS-REQ
AS-REP error-code: eRR-PREAUTH-REQUIRED (25)
AS-REQ
error-code: eRR-PREAUTH-FAILED (24)

I went back and created new keys for my working system to make sure my method of generating the keys was correct. My working 6.4 system has no problem.

On the non-working 6.4 system, I can do a kinit username and supply the user password with no problem. But I cannot do a kinit -k, and if I do a kinit and supply the password set with ktpass, I end up with:

kinit: Preauthentication failed while getting initial credentials  

In frustration I went back, created a user account, and generated a keytab from it. This also failed with the same error. Then I turned off pre-authentication on the user account in AD, and kinit returned this:

[root@ ~]# kinit -k nfs/nfstestsystem.rockies.beta
kinit: Password incorrect while getting initial credentials

I suspect the keytab is somehow getting corrupted or the OS is using the keys incorrectly.

My problem is that these errors are so generic it is almost impossible to find anything of value on the message boards.

I tried to post these questions on the CentOS forums, but I did not get very far.

An strace of kinit on both systems shows essentially the same calls to the same libraries.
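One more diagnostic worth recording: comparing what the keytab holds against what the KDC expects. With the MIT krb5 client tools (an assumption on my part, since that is what these RPMs provide), that comparison looks roughly like:

```shell
# enctypes and key version numbers (kvno) stored in the keytab
klist -kte /etc/krb5.keytab

# kvno the KDC currently holds for the principal (requires a usable TGT first)
kinit someuser
kvno nfs/oldlabsystem
```

A kvno or enctype mismatch between the two would fit the "Preauthentication failed" symptom.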

Can't connect from LAN to PPTP VPN client

Posted: 01 May 2021 01:08 PM PDT

Our office has a number of incoming PPTP connections to our TP-Link TL-R600VPN router from a number of Windows Embedded for Point of Service (WEPOS) terminals located in our restaurants.

One restaurant location is successfully connecting to our router, however reaching it over the network from the office is proving troublesome. I've attempted to ping the machine whilst it's connected to the router, but I just get request timed out errors. This is an isolated case: all of our other terminals respond to ping and I can successfully connect to them via FTP and VNC (which is what we use the VPN for).

I assume therefore that this is a client issue, but I have no idea where to start looking. Can anyone provide some suggestions?

--- Edits in response to Tom --- Our main router (TP-Link TL-R600VPN) has a built-in PPTP server with MPPE enabled. I'm unsure whether it supports GRE, but since all the other clients are connecting, I assume it does.

All of the client machines are running Windows Embedded for Point of Service (WEPOS) 2009 and connect using the Windows dial-up (rasphone.exe) client. Windows firewall was enabled on the machine, and I have since disabled it for testing along with antivirus.

The router LAN is running on 192.168.7.0 with a subnet mask of 255.255.0.0 - VPN clients are assigned addresses in the 192.168.77.0 subnet. All of our client terminals sit behind NATs themselves (with PPTP pass-through enabled) and run on a 192.168.2.0 LAN subnet, with a subnet mask of 255.255.255.0.

Interestingly, in further tests I've discovered that all of the VPN clients on the same subnet can connect to this one terminal without issue. It's only the workstations on the LAN that continuously time out. Running arp -a from any of the LAN-based workstations displays the VPN client in question, but I can't get any further than this.
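Since the VPN clients can reach this terminal but LAN workstations cannot, one suggestion I have seen is to verify the terminal has a route back to the office LAN over the tunnel. On the WEPOS box that could be checked and, if missing, added roughly like this (the 192.168.77.1 gateway is a guess at the VPN server's tunnel address, and the /24 mask is an assumption):

```
rem inspect the current routing table on the terminal
route print

rem hypothetical route back to the office LAN via the tunnel
route add 192.168.7.0 mask 255.255.255.0 192.168.77.1
```

If the route exists and ping still times out, the problem is more likely on the terminal's firewall or the router's handling of LAN-to-VPN traffic.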

monit does not run script after "if does not exist" check

Posted: 01 May 2021 08:04 PM PDT

I have set up monitoring with monit, but I'm having trouble with a process-existence check: if the process is not running, I want to run a specific script that creates a PagerDuty alert.

My Monit file looks like this:

check process "myapp" matching "myapp"
  start program = "/usr/local/myapp start"
  stop program  = "/usr/local/myapp stop"
  if does not exist then exec "/bin/bash pagerduty_script 'MyApp Down' trigger"

The pagerduty_script is just a wrapper that takes two arguments: an event string and an action ("trigger").

The script itself works: I've tested it in a terminal and it runs fine, and events are actually created in PagerDuty. However, monit doesn't seem to run it, even though no such process is running:

ps -ef | grep myapp
vagrant  23950 23136  0 17:40 pts/0    00:00:00 grep --color=auto myapp

sudo monit status:

Process 'myapp'
  status                            Execution failed
  monitoring status                 Monitored
  data collected                    Tue, 16 Sep 2014 17:40:11

I don't understand why it works on the console, but monit doesn't actually generate the event. Any help would be greatly appreciated!
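The "Execution failed" status makes me suspect paths: monit runs exec without a login shell, so a relative script path like pagerduty_script may not resolve. A version of the rule with absolute paths everywhere might be worth trying (the /usr/local/bin location below is an assumption):

```
check process "myapp" matching "myapp"
  start program = "/usr/local/myapp start"
  stop program  = "/usr/local/myapp stop"
  if does not exist then exec "/bin/bash /usr/local/bin/pagerduty_script 'MyApp Down' trigger"
```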

Windows Server 2012 / 2008 interface binding order and route metrics

Posted: 01 May 2021 09:39 PM PDT

Is there a relationship between the network adapter binding order and the route table metric on Windows Server 2008 or 2012?

I was told that the binding order is related to the routing metrics in the route table, such that interfaces with a higher binding order will have lower routing metrics (i.e. be more preferred).

I checked on a couple of Windows 2008/2012 boxes, but I can't see any relation, nor can I understand why the two should be related at all. A routing metric is a cost associated with following a particular network route, whereas the binding order (in my understanding) determines which interface network services prefer to attach to first.
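For what it's worth, the metrics I compared come from two places, and both can be inspected independently of binding order:

```
rem per-route metrics (the Metric column)
route print

rem per-interface metrics, which Windows adds to the route metric
netsh interface ipv4 show interfaces
```

If the effective metric were tied to binding order, a change in binding order should show up in one of these two outputs; in my tests it did not.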

SSH hangs after authentication

Posted: 01 May 2021 12:47 PM PDT

When logging in to one of my servers over ssh, it just hangs after authentication. This is the output on the client with -v.

OpenSSH_4.3p2, OpenSSL 0.9.8e-fips-rhel5 01 Jul 2008
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: Applying options for *
debug1: Connecting to host1 [10.6.27.64] port 22.
debug1: Connection established.
debug1: identity file /home/user/.ssh/identity type -1
debug1: identity file /home/user/.ssh/id_rsa type 1
debug1: identity file /home/user/.ssh/id_dsa type -1
debug1: loaded 3 keys
debug1: Remote protocol version 2.0, remote software version OpenSSH_5.3
debug1: match: OpenSSH_5.3 pat OpenSSH*
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_4.3
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug1: kex: server->client aes128-ctr hmac-md5 none
debug1: kex: client->server aes128-ctr hmac-md5 none
debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
debug1: Host 'host1' is known and matches the RSA host key.
debug1: Found key in /home/user/.ssh/known_hosts:172
debug1: ssh_rsa_verify: signature correct
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug1: SSH2_MSG_NEWKEYS received
debug1: SSH2_MSG_SERVICE_REQUEST sent
debug1: SSH2_MSG_SERVICE_ACCEPT received
debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic,password
debug1: Next authentication method: gssapi-with-mic
debug1: Unspecified GSS failure.  Minor code may provide more information
No credentials cache found

debug1: Unspecified GSS failure.  Minor code may provide more information
No credentials cache found

debug1: Unspecified GSS failure.
Minor code may provide more information
No credentials cache found

debug1: Next authentication method: publickey
debug1: Trying private key: /home/user/.ssh/identity
debug1: Offering public key: /home/user/.ssh/id_rsa
debug1: Server accepts key: pkalg ssh-rsa blen 277
debug1: read PEM private key done: type RSA
debug1: Authentication succeeded (publickey).
debug1: channel 0: new [client-session]
debug1: Entering interactive session.
debug1: Sending environment.
debug1: Sending env LANG = C
debug1: Sending env LC_ALL = C
Last login: Wed May 21 10:24:14 2014 from host2
This machine has been configured with kickstart
host1 in bcinf17 in bay 3 in rack D10-Mid

And in /var/log/secure on the server I see this (lucky I still have a session open):

May 21 10:27:31 host1 sshd[12387]: Accepted publickey for user from 1.1.11.239 port 34135 ssh2
May 21 10:27:31 host1 sshd[12387]: pam_unix(sshd:session): session opened for user user by (uid=0)

So nothing obvious going wrong. The client and server seem able to communicate. Nothing in /var/log/messages.

Plenty of disk space. Some paths are mounted (including home areas), but my still active shell can access them OK.

I can connect to other servers; only this one has the problem. I have tried restarting sshd. The config file for sshd looks like the default, so nothing in there. As far as I know, nothing has changed recently.

Trying to run a command (ssh host1 -t bash, or -t vi) also seems to hang, so I don't think it's anything to do with my login scripts.

I have also tried logging in from other hosts in the same location and in other locations, from Windows via PuTTY, and logging in with a password instead of a key.

Not sure where else to look or what else to try.

This is a RHEL 6.4 server, 64 bit.
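One diagnostic I could still try is running a second sshd in debug mode on a spare port, so the server side of the hang becomes visible (port 2222 is an arbitrary choice):

```shell
# on host1, as root; -d keeps sshd in the foreground for a single connection
/usr/sbin/sshd -d -p 2222

# from the client, in another terminal
ssh -v -p 2222 host1
```

The foreground sshd prints each stage as it happens, which should show exactly where the session stalls after authentication.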

Setting a VM boot CD via PowerCLI

Posted: 01 May 2021 06:05 PM PDT

I have a sneaking suspicion that this may be a bug, but I'm definitely willing to entertain the possibility that I'm doing things wrong.

I have a VMware virtual machine in $vm, and I'm trying to assign a boot CD for it. $vm is powered off.

The documented method seems pretty straightforward:

Set-CDDrive -CD (Get-CDDrive -vm $vm) -IsoPath "[datastorename] \whatever.iso" -StartConnected 1  

Now, when I start the VM, it immediately tries to PXE boot. I turn off the machine, and in the vSphere client I edit the VM's properties, go to "CD/DVD drive 1", and verify that under "Device Status" there is a checkmark next to "Connect at power on".

Here's the crazy thing. When I uncheck that box, then check it again, then start the VM, it boots from the ISO.

I've done it again and again, with the console open, with it closed, and every time, I can set the StartConnected flag on the CLI, and the GUI reflects the setting, but only after I mark the checkbox manually does it actually boot from the ISO.

Is there a step that I'm neglecting to perform in PowerCLI to get this setting to "take"?
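One workaround I am considering (unverified; the idea is simply to force from PowerCLI the same reconfigure step that the GUI checkbox apparently triggers) is to toggle the flag off and back on before powering up:

```
$cd = Get-CDDrive -VM $vm
Set-CDDrive -CD $cd -StartConnected:$false -Confirm:$false
Set-CDDrive -CD $cd -StartConnected:$true  -Confirm:$false
Start-VM -VM $vm
```

If that works, it would at least confirm the GUI checkbox is doing something beyond setting the StartConnected property.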

nginx lua: os.execute waitpid() failed (10: No child processes)

Posted: 01 May 2021 07:01 PM PDT

So, I'm trying to execute a script on every request. I know how that sounds; this is for a development environment.

I've added this to my nginx config:

access_by_lua_file "/opt/nginx/git-magic.lua";

git-magic.lua contains:

local status = os.execute('/opt/nginx/git-magic.sh')

And git-magic.sh contains:

echo hello >> /tmp/git-magic

The issue is:

Whenever I hit any URL, I get the following in the nginx error log:

2012/09/27 15:35:48 [alert] 3241#0: waitpid() failed (10: No child processes)

Any ideas what I might be doing wrong?

Possible for linux bridge to intercept traffic?

Posted: 01 May 2021 03:11 PM PDT

I have a Linux machine set up as a bridge between a client and a server:

brctl addbr br0
brctl addif br0 eth1
brctl addif br0 eth2
ifconfig eth1 0.0.0.0
ifconfig eth2 0.0.0.0
ip link set br0 up

I also have an application listening on port 8080 of this machine. Is it possible to have traffic destined for port 80 passed to my application instead? From my research it looks like this can be done using ebtables and iptables.

Here is the rest of my setup:

# set ebtables to pass this traffic up to IP for processing; DROP on the broute table should do this
ebtables -t broute -A BROUTING -p ipv4 --ip-proto tcp --ip-dport 80 -j redirect --redirect-target DROP

# set iptables to forward this traffic to my app listening on port 8080
iptables -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY --on-port 8080 --tproxy-mark 1/1
iptables -t mangle -A PREROUTING -p tcp -j MARK --set-mark 1/1

# once the flows are marked, have them delivered locally via the loopback interface
ip rule add fwmark 1/1 table 1
ip route add local 0.0.0.0/0 dev lo table 1

# enable IP packet forwarding
echo 1 > /proc/sys/net/ipv4/ip_forward

However, nothing ever reaches my application. Am I missing anything? My understanding is that the DROP target on the broute table's BROUTING chain pushes the frame up to be processed by iptables.

Secondly, are there any other alternatives I should investigate?

Edit: IPtables gets it at nat PREROUTING, but it looks like it drops after that; the INPUT chain (in either mangle or filter) doesn't see the packet.
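For comparison, the TPROXY example in the kernel documentation uses a separate DIVERT chain, so packets belonging to already-established, locally-terminated flows (matched by -m socket) skip the TPROXY rule; whether this is what's missing here is an assumption on my part:

```shell
iptables -t mangle -N DIVERT
iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
iptables -t mangle -A DIVERT -j MARK --set-mark 1
iptables -t mangle -A DIVERT -j ACCEPT
iptables -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY \
    --tproxy-mark 0x1/0x1 --on-port 8080
```

Note also that TPROXY requires the listening socket on port 8080 to be opened with the IP_TRANSPARENT socket option; a plain listener will not receive the diverted flows.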

Dovecot user lookup fails when using username@domain format

Posted: 01 May 2021 05:03 PM PDT

I have Dovecot v2.0.11 installed on a FreeBSD server, and the user lookups for incoming email addresses are failing, but lookups for system users are successful.

Dovecot is set up to use system users, so my dovecot.conf has

userdb {
  driver = passwd
}

and

passdb {
  driver = passwd
}

I have auth debug enabled.

For example, I have a user called webmaster, and using doveadm user for "webmaster" works as follows:

# doveadm user webmaster
userdb: webmaster
  system_groups_user: webmaster
  uid       : 1020
  gid       : 1020
  home      : /home/webmaster

However using doveadm user to lookup webmaster@myregisteredname.com fails as follows:

# doveadm user webmaster@myregisteredname.com  userdb lookup: user webmaster@myregisteredname.com doesn't exist  

This is resulting in incoming mail for webmaster@myregisteredname.com to bounce with an "unknown user" error.

Here's the failure logged in /var/log/maillog:

Apr 16 20:13:35 www dovecot: auth: passwd(webmaster@myregisteredname.com): unknown user  

Here's the failure logged in /var/log/debug.log:

Apr 16 20:13:35 www dovecot: auth: Debug: master in: USER	1	webmaster@myregisteredname.com	service=doveadm
Apr 16 20:13:35 www dovecot: auth: Debug: passwd(webmaster@myregisteredname.com): lookup
Apr 16 20:13:35 www dovecot: auth: Debug: master out: NOTFOUND	1

The users and their home directories were imported from another server, and the users were set up using the vipw tool. I'm sure there's something I missed on the import that's not "linking" the system users with the Dovecot lookup.

Any ideas about what that something may be?

EDIT: Using BillThor's advice, I updated dovecot.conf as follows:

# doveconf -n passdb userdb
passdb {
  args = username_format=%n
  driver = passwd
}
userdb {
  args = username_format=%n
  driver = passwd
}

However, now, doveadm user fails in a different fashion:

# doveadm user webmaster@pantronx.com
doveadm(root): Error: userdb lookup(webmaster@myregisteredname.com): Disconnected unexpectedly
doveadm(root): Fatal: userdb lookup failed for webmaster@myregisteredname.com

And, it no longer works for users without a domain:

# doveadm user webmaster
doveadm(root): Error: userdb lookup(webmaster): Disconnected unexpectedly
doveadm(root): Fatal: userdb lookup failed for webmaster

When I get the above messages, the following is in /var/log/maillog:

Apr 17 17:30:02 www dovecot: auth: Fatal: passdb passwd: Unknown setting: username_format=%u
Apr 17 17:30:02 www dovecot: master: Error: service(auth): command startup failed, throttling
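Since the passwd passdb apparently rejects username_format as an argument in this version, an alternative I have seen suggested (untested on 2.0.11) is to strip the domain globally with auth_username_format instead of per-database args:

```
# dovecot.conf: strip the @domain part before any passdb/userdb lookup
auth_username_format = %n

passdb {
  driver = passwd
}
userdb {
  driver = passwd
}
```

That way both databases see the bare system username, which is what the passwd driver can actually resolve.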

How to auto-cc a system email account any time a user creates an appointment

Posted: 01 May 2021 06:05 PM PDT

I will not bother explaining my full architecture or reasons for wanting this in order to keep this question short:

Is it possible to auto-cc a certain email account any time an Exchange user creates an appointment or meeting in his own calendar?

  • Is it possible using rules?
  • Our Exchange 2007 server is outsourced, I cannot change the configuration or install plugins server-side
  • Preferably, it still should work server-side, because users may use the Outlook client but also Outlook Web Access
  • Is there any other way, perhaps using group policies?

My conclusion so far is that the only viable way to accomplish this is to build an Outlook add-on. The problem there is that it will need to be managed for thousands of desktop users and that the add-on will not work when using another client (OWA, mobile).

An alternative architecture could be to pull the information from the user's calendar on a scheduled basis. Given that we are talking about a lot of users, scalability is a major issue; Microsoft has confirmed this as well.

Can you confirm that my thinking is correct or do you have any other solutions?

ORA-12705: invalid or unknown NLS parameter value specified

Posted: 01 May 2021 05:03 PM PDT

I have a J2EE application hosted on JBoss on Linux.

When I try to access the application, I see the following error in the server.log file.

ORA-12705: invalid or unknown NLS parameter value specified  

When I point the same JBoss instance to a different schema, the application works fine.

I tried to go through a few forums and found that the NLS parameter settings are fine.

Can anyone help?

  • JBoss version = 4.0.2
  • DB version = Oracle 10.2

Output of the locale command on Linux:

$ locale
LANG=en_US.UTF-8
LC_CTYPE="en_US.UTF-8"
LC_NUMERIC="en_US.UTF-8"
LC_TIME="en_US.UTF-8"
LC_COLLATE="en_US.UTF-8"
LC_MONETARY="en_US.UTF-8"
LC_MESSAGES="en_US.UTF-8"
LC_PAPER="en_US.UTF-8"
LC_NAME="en_US.UTF-8"
LC_ADDRESS="en_US.UTF-8"
LC_TELEPHONE="en_US.UTF-8"
LC_MEASUREMENT="en_US.UTF-8"
LC_IDENTIFICATION="en_US.UTF-8"
LC_ALL=
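Since the shell locale looks sane, one suggestion I found is that ORA-12705 often comes from the JVM's own locale settings rather than the environment: the Oracle JDBC driver derives its NLS language/territory from the JVM defaults. Pinning them in JBoss's startup options (the run.conf path is an assumption for this JBoss version) looks like:

```
# $JBOSS_HOME/bin/run.conf
JAVA_OPTS="$JAVA_OPTS -Duser.language=en -Duser.region=US"
```

If the other (working) schema connection uses different driver or session settings, that would also explain why only one datasource triggers the error.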
