Amazon EC2 - JGroups EC2 cluster fails to connect (times out) with Hibernate Search / Infinispan setup
I'm trying to set up a distributed Hibernate Search (5.5.4) cluster on an Elastic Beanstalk (Tomcat 8) environment, using Infinispan (8.2.4) and JGroups.
I'm stuck on an issue where a node can't connect to the existing cluster, and times out trying to connect. The log shows:
Starting JGroups channel ispn
variable "${jgroups.s3.pre_signed_delete_url}" in S3_PING could not be substituted; pre_signed_delete_url removed from properties
variable "${jgroups.s3.prefix}" in S3_PING could not be substituted; prefix removed from properties
variable "${jgroups.s3.pre_signed_put_url}" in S3_PING could not be substituted; pre_signed_put_url removed from properties
ip-172-31-24-216-1799: JOIN(ip-172-31-24-216-1799) sent to ip-172-31-14-33-238 timed out (after 5000 ms), on try 1
ip-172-31-24-216-1799: JOIN(ip-172-31-24-216-1799) sent to ip-172-31-14-33-238 timed out (after 5000 ms), on try 2
...
ip-172-31-24-216-1799: JOIN(ip-172-31-24-216-1799) sent to ip-172-31-14-33-238 timed out (after 5000 ms), on try 10
ip-172-31-24-216-1799: too many JOIN attempts (10): becoming singleton
ISPN000094: Received new cluster view for channel ispn: [ip-172-31-24-216-
Channel ispn local address is ip-172-31-24-216-1799, physical addresses are [127.0.0.1:7800]
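For what it's worth, the three S3_PING warnings just say that the optional `${jgroups.s3.*}` placeholders had no values, so those attributes were dropped. The bucket and credentials the stack's S3_PING discovery needs would normally be supplied as JVM system properties. A hedged sketch (the bucket name and keys below are placeholders; the property names mirror the placeholders in the packaged default-jgroups-ec2.xml):

```shell
# Sketch: supply the S3_PING placeholders that default-jgroups-ec2.xml reads
# from JVM system properties. Bucket name and keys here are placeholders.
export CATALINA_OPTS="$CATALINA_OPTS \
 -Djgroups.s3.bucket=my-jgroups-bucket \
 -Djgroups.s3.access_key=MY_ACCESS_KEY \
 -Djgroups.s3.secret_access_key=MY_SECRET_KEY"
```

On Elastic Beanstalk these could be set through the environment's JVM options rather than a shell script.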
I have enabled all types of inbound traffic within the Elastic Beanstalk security group, and I can ping the other nodes in the group using their internal IP addresses.
This is my infinispan.xml file:
<infinispan
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="urn:infinispan:config:8.2 http://infinispan.org/schemas/infinispan-config-8.2.xsd"
    xmlns="urn:infinispan:config:8.2">

    <jgroups>
        <stack-file name="default-jgroups-ec2" path="default-configs/default-jgroups-ec2.xml"/>
    </jgroups>

    <cache-container name="HibernateSearch" default-cache="default" statistics="false" shutdown-hook="DONT_REGISTER">
        <transport stack="default-jgroups-ec2"/>
        <!-- Duplicate domains are allowed so that multiple deployments with the default
             configuration of Hibernate Search applications will work - if possible it is
             better to use JNDI to share the CacheManager across applications -->
        <jmx duplicate-domains="true"/>

        <!-- *************************************** -->
        <!--  Cache to store Lucene's file metadata  -->
        <!-- *************************************** -->
        <replicated-cache name="LuceneIndexesMetadata" mode="SYNC" remote-timeout="25000">
            <locking striping="false" acquire-timeout="10000" concurrency-level="500" write-skew="false"/>
            <transaction mode="NONE"/>
            <eviction max-entries="-1" strategy="NONE"/>
            <expiration max-idle="-1"/>
            <persistence>
                <file-store path="luceneindexes/metadata" preload="true"/>
            </persistence>
            <indexing index="NONE"/>
            <state-transfer enabled="true" timeout="480000" await-initial-transfer="true"/>
        </replicated-cache>

        <!-- **************************** -->
        <!--  Cache to store Lucene data  -->
        <!-- **************************** -->
        <distributed-cache name="LuceneIndexesData" mode="SYNC" remote-timeout="25000">
            <locking striping="false" acquire-timeout="10000" concurrency-level="500" write-skew="false"/>
            <transaction mode="NONE"/>
            <eviction max-entries="-1" strategy="NONE"/>
            <expiration max-idle="-1"/>
            <persistence>
                <file-store path="luceneindexes/data"/>
            </persistence>
            <indexing index="NONE"/>
            <state-transfer enabled="true" timeout="480000" await-initial-transfer="true"/>
        </distributed-cache>

        <!-- ***************************** -->
        <!--  Cache to store Lucene locks  -->
        <!-- ***************************** -->
        <replicated-cache name="LuceneIndexesLocking" mode="SYNC" remote-timeout="25000">
            <locking striping="false" acquire-timeout="10000" concurrency-level="500" write-skew="false"/>
            <transaction mode="NONE"/>
            <eviction max-entries="-1" strategy="NONE"/>
            <expiration max-idle="-1"/>
            <persistence>
                <file-store path="luceneindexes/locking"/>
            </persistence>
            <indexing index="NONE"/>
            <state-transfer enabled="true" timeout="480000" await-initial-transfer="true"/>
        </replicated-cache>
    </cache-container>
</infinispan>
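For context, the Hibernate Search side is wired to a file like this through configuration properties. A sketch using the hibernate-search-infinispan property names (the resource name `infinispan.xml` is an assumption about where the file sits on the classpath):

```properties
# Sketch: select the Infinispan directory provider and point it at the
# Infinispan configuration file above (assumed to be on the classpath).
hibernate.search.default.directory_provider = infinispan
hibernate.search.infinispan.configuration_resourcename = infinispan.xml
```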
And the JGroups config file is the default EC2 config packaged with Infinispan, default-jgroups-ec2.xml.
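For reference, the discovery/transport part of that packaged stack looks roughly like the sketch below. This is from memory of the Infinispan 8.x default, so the exact attributes may differ in your copy; the point is that `bind_addr` falls back to 127.0.0.1 when `jgroups.tcp.address` is unset, and S3_PING reads its bucket and credentials from the `jgroups.s3.*` placeholders:

```xml
<!-- Sketch only: excerpt of a default-jgroups-ec2.xml-style stack. -->
<config xmlns="urn:org:jgroups">
    <TCP bind_addr="${jgroups.tcp.address:127.0.0.1}"
         bind_port="${jgroups.tcp.port:7800}"/>
    <S3_PING location="${jgroups.s3.bucket:}"
             access_key="${jgroups.s3.access_key:}"
             secret_access_key="${jgroups.s3.secret_access_key:}"/>
</config>
```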
Does anyone have an idea of what may have gone wrong, or what else I need to get this working?
Your local address is 127.0.0.1:7800, which is the default. That will not work if the node needs to talk to other nodes.
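One way to fix the binding, as a sketch assuming the stack reads its bind address from the `jgroups.tcp.address` placeholder (as the packaged EC2 stack does): pass a non-loopback address as a JVM system property, either an explicit private IP or JGroups' `SITE_LOCAL` keyword, which makes JGroups pick a site-local (private) address itself.

```shell
# Sketch: make JGroups bind to a site-local (private) address instead of the
# 127.0.0.1 default. SITE_LOCAL tells JGroups to pick a private IP.
export CATALINA_OPTS="$CATALINA_OPTS -Djgroups.tcp.address=SITE_LOCAL"
```

On EC2 you could instead resolve the instance's private IP from the instance metadata service and pass that value explicitly.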