<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Why does Splunk 6.1.2 forwarder on an AIX 7.1 machine keep crashing? in Getting Data In</title>
    <link>https://community.splunk.com/t5/Getting-Data-In/Why-does-Splunk-6-1-2-forwarder-on-an-AIX-7-1-machine-keep/m-p/165188#M33500</link>
    <description>&lt;P&gt;Is there any PowerPC hardware sizing guide for running Splunk Enterprise (all roles in a distributed search deployment)?&lt;/P&gt;

&lt;P&gt;Or even with Red Hat on PowerPC?&lt;/P&gt;</description>
    <pubDate>Sat, 25 Apr 2015 00:41:55 GMT</pubDate>
    <dc:creator>theunf</dc:creator>
    <dc:date>2015-04-25T00:41:55Z</dc:date>
    <item>
      <title>Why does Splunk 6.1.2 forwarder on an AIX 7.1 machine keep crashing?</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Why-does-Splunk-6-1-2-forwarder-on-an-AIX-7-1-machine-keep/m-p/165185#M33497</link>
      <description>&lt;P&gt;I have an AIX 7.1 machine set up as a forwarder running Splunk 6.1.2. Splunk keeps crashing, and I need help figuring out what is causing the crash.&lt;BR /&gt;
Below is the Splunk crash log.&lt;/P&gt;

&lt;P&gt;Received fatal signal 11 (Segmentation fault).&lt;BR /&gt;
 Cause:&lt;BR /&gt;
   No memory mapped at address [0x352D32302D31372D].&lt;BR /&gt;
 Crashing thread: MainTailingThread&lt;BR /&gt;
 Registers:&lt;BR /&gt;
    IAR:  [0x0900000000520C28] ?&lt;BR /&gt;
    MSR:  [0xA00000000000D032]&lt;BR /&gt;
    R0:  [0x0900000000520A24]&lt;BR /&gt;
    R1:  [0x0000000116A515A0]&lt;BR /&gt;
    R2:  [0x09001000A0396C80]&lt;BR /&gt;
    R3:  [0x352D32302D31372D]&lt;BR /&gt;
    R4:  [0x352D32302D31372E]&lt;BR /&gt;
    R5:  [0x0000000116A52D90]&lt;BR /&gt;
    R6:  [0x0000000000000000]&lt;BR /&gt;
    R7:  [0x0000000116A52DF8]&lt;BR /&gt;
    R8:  [0x0000000000000028]&lt;BR /&gt;
    R9:  [0x0000000116A51898]&lt;BR /&gt;
    R10:  [0x0000000000000001]&lt;BR /&gt;
    R11:  [0x0000000000000000]&lt;BR /&gt;
    R12:  [0x09001000A0391928]&lt;BR /&gt;
    R13:  [0x0000000116A5D800]&lt;BR /&gt;
    R14:  [0x000000011305D0C0]&lt;BR /&gt;
    R15:  [0x000000000008B4C0]&lt;BR /&gt;
    R16:  [0x0000000112D17AA0]&lt;BR /&gt;
    R17:  [0x0000000000000080]&lt;BR /&gt;
    R18:  [0x000000011308B660]&lt;BR /&gt;
    R19:  [0x000000000005CF20]&lt;BR /&gt;
    R20:  [0x0000000000000000]&lt;BR /&gt;
    R21:  [0x0000000000000000]&lt;BR /&gt;
    R22:  [0x0000000000000000]&lt;BR /&gt;
    R23:  [0x0000000000000000]&lt;BR /&gt;
    R24:  [0x0000000112D17AA0]&lt;BR /&gt;
    R25:  [0x0000000000000080]&lt;BR /&gt;
    R26:  [0x000000011308B660]&lt;BR /&gt;
    R27:  [0x00000001123A4650]&lt;BR /&gt;
    R28:  [0x0000000116A53E40]&lt;BR /&gt;
    R29:  [0x0000000116A53E20]&lt;BR /&gt;
    R30:  [0x0000000116A52C60]&lt;BR /&gt;
    R31:  [0x0000000117833030]&lt;BR /&gt;
    CR:  [0x0000000044000059]&lt;BR /&gt;
    XER:  [0x0000000000000008]&lt;BR /&gt;
    LR:  [0x0900000000520C24]&lt;BR /&gt;
    CTR:  [0x0900000000520A00]&lt;/P&gt;

&lt;P&gt;OS: AIX&lt;BR /&gt;
 Arch: PowerPC&lt;/P&gt;

&lt;P&gt;Backtrace:&lt;BR /&gt;
+++PARALLEL TOOLS CONSORTIUM LIGHTWEIGHT COREFILE FORMAT version 1.0&lt;BR /&gt;
+++LCB 1.0 Sun Aug 17 17:30:04 2014 Generated by IBM AIX 7.1&lt;/P&gt;

&lt;P&gt;+++ID Node 0 Process 10420352 Thread 29&lt;BR /&gt;
***FAULT "SIGSEGV - Segmentation violation"&lt;BR /&gt;
+++STACK&lt;BR /&gt;
&lt;EM&gt;Tidy&lt;/EM&gt;&lt;EM&gt;Q3_3std7_LFS_ON12basic_stringXTcTQ2_3std11char_traitsXTc_TQ2_3std9allocatorXTc&lt;/EM&gt;&lt;EM&gt;Fb@AF278_62 : 0x00000028&lt;BR /&gt;
__dt&lt;/EM&gt;&lt;EM&gt;Q3_3std7_LFS_ON12basic_stringXTcTQ2_3std11char_traitsXTc_TQ2_3std9allocatorXTc&lt;/EM&gt;&lt;EM&gt;Fv : 0x00000020&lt;BR /&gt;
__dt&lt;/EM&gt;&lt;EM&gt;3StrFv : 0x00000050&lt;BR /&gt;
_Destroy&lt;/EM&gt;&lt;EM&gt;3stdH3Str_P3Str_v : 0x00000018&lt;BR /&gt;
destroy&lt;/EM&gt;&lt;EM&gt;Q2_3std9allocatorXT3Str_FP3Str : 0x00000018&lt;BR /&gt;
_Destroy&lt;/EM&gt;&lt;EM&gt;Q2_3std6vectorXT3StrTQ2_3std9allocatorXT3Str&lt;/EM&gt;&lt;EM&gt;FP3StrT1 : 0x00000030&lt;BR /&gt;
insert&lt;/EM&gt;&lt;EM&gt;Q2_3std6vectorXT3StrTQ2_3std9allocatorXT3Str&lt;/EM&gt;&lt;EM&gt;FQ2_3std6_PtritXT3StrTlTP3StrTR3StrTP3StrTR3Str_UlRC3Str : 0x00000290&lt;BR /&gt;
insert&lt;/EM&gt;&lt;EM&gt;Q2_3std6vectorXT3StrTQ2_3std9allocatorXT3Str&lt;/EM&gt;&lt;EM&gt;FQ2_3std6_PtritXT3StrTlTP3StrTR3StrTP3StrTR3Str_RC3Str : 0x00000098&lt;BR /&gt;
push_back&lt;/EM&gt;&lt;EM&gt;Q2_3std6vectorXT3StrTQ2_3std9allocatorXT3Str&lt;/EM&gt;&lt;EM&gt;FRC3Str : 0x0000007c&lt;BR /&gt;
push_back&lt;/EM&gt;&lt;EM&gt;9StrVectorFRC3Str : 0x0000001c&lt;BR /&gt;
lineBreak&lt;/EM&gt;&lt;EM&gt;FRC10StrSegmentR9StrVectorR3Str : 0x00000118&lt;BR /&gt;
getLines&lt;/EM&gt;&lt;EM&gt;21FileClassifierManagerFRC8PathnameP3StrUlR9StrVectorR3StrPUlT6 : 0x00000308&lt;BR /&gt;
_getFileType&lt;/EM&gt;&lt;EM&gt;21FileClassifierManagerFP13PropertiesMapRC8PathnameR9StrVectorRbT4PC3StrUl : 0x00000a70&lt;BR /&gt;
getFileType&lt;/EM&gt;&lt;EM&gt;21FileClassifierManagerFP13PropertiesMapRC8PathnamebPC3StrUl : 0x0000009c&lt;BR /&gt;
classifySource&lt;/EM&gt;&lt;EM&gt;10TailReaderCFR15CowPipelineDataRC8PathnameR3StrN23b : 0x00000194&lt;BR /&gt;
setupSourcetype&lt;/EM&gt;&lt;EM&gt;10TailReaderFR15WatchedTailFileRQ2_7Tailing10FileStatus : 0x0000020c&lt;BR /&gt;
readFile&lt;/EM&gt;&lt;EM&gt;10TailReaderFR15WatchedTailFileP11TailWatcherP11BatchReader : 0x000001b8&lt;BR /&gt;
readFile&lt;/EM&gt;&lt;EM&gt;11TailWatcherFR15WatchedTailFile : 0x0000024c&lt;BR /&gt;
fileChanged&lt;/EM&gt;&lt;EM&gt;11TailWatcherFP16WatchedFileStateRC7Timeval : 0x00000d0c&lt;BR /&gt;
callFileChanged&lt;/EM&gt;&lt;EM&gt;30FilesystemChangeInternalWorkerFR7TimevalP16WatchedFileState : 0x00000090&lt;BR /&gt;
when_expired&lt;/EM&gt;&lt;EM&gt;30FilesystemChangeInternalWorkerFRUL : 0x00000368&lt;BR /&gt;
runExpiredTimeouts&lt;/EM&gt;&lt;EM&gt;11TimeoutHeapFR7Timeval : 0x000001ac&lt;BR /&gt;
run&lt;/EM&gt;&lt;EM&gt;9EventLoopFv : 0x00000094&lt;BR /&gt;
run&lt;/EM&gt;&lt;EM&gt;11TailWatcherFv : 0x00000118&lt;BR /&gt;
main&lt;/EM&gt;&lt;EM&gt;13TailingThreadFv : 0x0000020c&lt;BR /&gt;
callMain&lt;/EM&gt;_6ThreadFPv : 0x000000b4&lt;BR /&gt;
_pthread_body : 0x000000f0&lt;BR /&gt;
---STACK&lt;BR /&gt;
---ID Node 0 Process 10420352 Thread 29&lt;/P&gt;

&lt;P&gt;+++ID Node 0 Process 10420352 Thread 1&lt;BR /&gt;
+++STACK&lt;BR /&gt;
poll_&lt;EM&gt;FPvUll : 0x00000024&lt;BR /&gt;
run&lt;/EM&gt;&lt;EM&gt;9EventLoopFv : 0x0000016c&lt;BR /&gt;
main&lt;/EM&gt;&lt;EM&gt;10MainThreadFv : 0x000000a0&lt;BR /&gt;
run&lt;/EM&gt;_10MainThreadFv : 0x00000030&lt;BR /&gt;
main : 0x00002aa0&lt;BR /&gt;
---STACK&lt;BR /&gt;
---ID Node 0 Process 10420352 Thread 1&lt;/P&gt;

&lt;P&gt;+++ID Node 0 Process 10420352 Thread 2&lt;BR /&gt;
+++STACK&lt;BR /&gt;
poll_&lt;EM&gt;FPvUll : 0x00000024&lt;BR /&gt;
run&lt;/EM&gt;&lt;EM&gt;9EventLoopFv : 0x0000016c&lt;BR /&gt;
main&lt;/EM&gt;&lt;EM&gt;19ProcessRunnerThreadFv : 0x00000058&lt;BR /&gt;
callMain&lt;/EM&gt;_6ThreadFPv : 0x000000b4&lt;BR /&gt;
_pthread_body : 0x000000f0&lt;BR /&gt;
---STACK&lt;BR /&gt;
---ID Node 0 Process 10420352 Thread 2&lt;/P&gt;

&lt;P&gt;+++ID Node 0 Process 10420352 Thread 3&lt;BR /&gt;
+++STACK&lt;BR /&gt;
&lt;EM&gt;event_wait : 0x00000344&lt;BR /&gt;
_cond_wait_local : 0x0000035c&lt;BR /&gt;
_cond_wait : 0x000000c8&lt;BR /&gt;
pthread_cond_timedwait : 0x00000200&lt;BR /&gt;
wait&lt;/EM&gt;&lt;EM&gt;16PthreadConditionFR14ConditionMutexRC20ConditionWaitTimeout : 0x00000114&lt;BR /&gt;
wait&lt;/EM&gt;&lt;EM&gt;16PthreadConditionFR20ScopedConditionMutexRC20ConditionWaitTimeout : 0x00000028&lt;BR /&gt;
remove&lt;/EM&gt;&lt;EM&gt;15PersistentQueueFR15CowPipelineDataRC20ConditionWaitTimeout : 0x000000a4&lt;BR /&gt;
remove&lt;/EM&gt;&lt;EM&gt;21ProducerConsumerQueueFR15CowPipelineDataRC20ConditionWaitTimeout : 0x00000044&lt;BR /&gt;
main&lt;/EM&gt;&lt;EM&gt;18QueueServiceThreadFv : 0x00000074&lt;BR /&gt;
callMain&lt;/EM&gt;_6ThreadFPv : 0x000000b4&lt;BR /&gt;
_pthread_body : 0x000000f0&lt;BR /&gt;
---STACK&lt;BR /&gt;
---ID Node 0 Process 10420352 Thread 3&lt;/P&gt;

&lt;P&gt;+++ID Node 0 Process 10420352 Thread 4&lt;BR /&gt;
+++STACK&lt;BR /&gt;
poll_&lt;EM&gt;FPvUll : 0x00000024&lt;BR /&gt;
run&lt;/EM&gt;&lt;EM&gt;9EventLoopFv : 0x0000016c&lt;BR /&gt;
run&lt;/EM&gt;&lt;EM&gt;14TcpChannelLoopFv : 0x00000014&lt;BR /&gt;
go&lt;/EM&gt;&lt;EM&gt;17SplunkdHttpServerFv : 0x00000050&lt;BR /&gt;
go&lt;/EM&gt;&lt;EM&gt;20SingleRestHttpServerFv : 0x00000020&lt;BR /&gt;
main&lt;/EM&gt;&lt;EM&gt;18HTTPDispatchThreadFv : 0x00000264&lt;BR /&gt;
callMain&lt;/EM&gt;_6ThreadFPv : 0x000000b4&lt;BR /&gt;
_pthread_body : 0x000000f0&lt;BR /&gt;
---STACK&lt;BR /&gt;
---ID Node 0 Process 10420352 Thread 4&lt;/P&gt;

&lt;P&gt;+++ID Node 0 Process 10420352 Thread 5&lt;BR /&gt;
+++STACK&lt;BR /&gt;
&lt;EM&gt;event_wait : 0x00000344&lt;BR /&gt;
_cond_wait_local : 0x0000035c&lt;BR /&gt;
_cond_wait : 0x000000c8&lt;BR /&gt;
pthread_cond_timedwait : 0x00000200&lt;BR /&gt;
wait&lt;/EM&gt;&lt;EM&gt;16PthreadConditionFR14ConditionMutexRC20ConditionWaitTimeout : 0x00000114&lt;BR /&gt;
main&lt;/EM&gt;&lt;EM&gt;23HttpClientPollingThreadFv : 0x0000087c&lt;BR /&gt;
callMain&lt;/EM&gt;_6ThreadFPv : 0x000000b4&lt;BR /&gt;
_pthread_body : 0x000000f0&lt;BR /&gt;
---STACK&lt;BR /&gt;
---ID Node 0 Process 10420352 Thread 5&lt;/P&gt;

&lt;P&gt;+++ID Node 0 Process 10420352 Thread 6&lt;/P&gt;</description>
      <pubDate>Mon, 28 Sep 2020 17:53:44 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Why-does-Splunk-6-1-2-forwarder-on-an-AIX-7-1-machine-keep/m-p/165185#M33497</guid>
      <dc:creator>edwardman88</dc:creator>
      <dc:date>2020-09-28T17:53:44Z</dc:date>
    </item>
    <item>
      <title>Re: Why does Splunk 6.1.2 forwarder on an AIX 7.1 machine keep crashing?</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Why-does-Splunk-6-1-2-forwarder-on-an-AIX-7-1-machine-keep/m-p/165186#M33498</link>
      <description>&lt;P&gt;You should check your data segment size (ulimit -d) to make sure it is set in line with what Splunk asks for. By default on AIX systems this is set too low, which can create issues for Splunk. When this happens you will usually see lots of "bad allocation" error messages in the logs that look like the following:&lt;/P&gt;

&lt;P&gt;ERROR PropertiesMapConfig - Failed to save stanza &lt;A href="user:%20,%20app:%20,%20root:%20/opt/splunkforwarder/etc" target="_blank"&gt;/var/adm/sudo.log_Mon_Sep_22_16:37:27_2014_1998275973&lt;/A&gt; to app learned: bad allocation &lt;/P&gt;

&lt;P&gt;With Splunk 4.2+, increase the data segment size (ulimit -d) to at least 1 GB (1073741824 bytes). &lt;/P&gt;
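&lt;P&gt;As a quick sanity check, the current soft limit can be compared against that floor. This is a sketch; keep in mind that ulimit -d reports KB while splunkd.log reports bytes:&lt;/P&gt;

```shell
# ulimit -d reports the soft data segment limit in KB ("unlimited" if unset).
required_kb=$(( 1073741824 / 1024 ))   # 1 GB -> 1048576 KB
current_kb=$(ulimit -d)
echo "required: ${required_kb} KB, current: ${current_kb} KB"
```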

&lt;P&gt;&lt;A href="http://docs.splunk.com/Documentation/Splunk/6.1.3/Troubleshooting/ulimitErrors" target="_blank"&gt;http://docs.splunk.com/Documentation/Splunk/6.1.3/Troubleshooting/ulimitErrors&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Mon, 28 Sep 2020 18:06:35 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Why-does-Splunk-6-1-2-forwarder-on-an-AIX-7-1-machine-keep/m-p/165186#M33498</guid>
      <dc:creator>kserra_splunk</dc:creator>
      <dc:date>2020-09-28T18:06:35Z</dc:date>
    </item>
    <item>
      <title>Re: Why does Splunk 6.1.2 forwarder on an AIX 7.1 machine keep crashing?</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Why-does-Splunk-6-1-2-forwarder-on-an-AIX-7-1-machine-keep/m-p/165187#M33499</link>
      <description>&lt;P&gt;I would like to expand on this answer if I may. As Kyle mentioned, AIX ulimit defaults are not overly generous. Typically, if your Splunk AIX instance crashes soon after startup, the first place to look for clues is $SPLUNK_HOME/var/log/splunk/splunkd.log.&lt;/P&gt;

&lt;P&gt;Look for "Splunk may not work due to ....." errors&lt;/P&gt;

&lt;P&gt;02-25-2015 13:23:42.953 +0100 INFO  ulimit - Limit: virtual address space size: unlimited&lt;BR /&gt;
02-25-2015 13:23:42.953 +0100 INFO  ulimit - Limit: data segment size: 134217728 bytes [hard maximum: unlimited]&lt;BR /&gt;
02-25-2015 13:23:42.953 +0100 WARN  ulimit - Splunk may not work due to small data segment limit!&lt;BR /&gt;
02-25-2015 13:23:42.953 +0100 INFO  ulimit - Limit: resident memory size: 33554432 bytes [hard maximum: unlimited]&lt;BR /&gt;
02-25-2015 13:23:42.953 +0100 WARN  ulimit - Splunk may not work due to small resident memory size limit!&lt;BR /&gt;
02-25-2015 13:23:42.953 +0100 INFO  ulimit - Limit: stack size: 33554432 bytes [hard maximum: 4294967296 bytes]&lt;BR /&gt;
02-25-2015 13:23:42.953 +0100 INFO  ulimit - Limit: core file size: 1073741312 bytes [hard maximum: unlimited]&lt;BR /&gt;
02-25-2015 13:23:42.953 +0100 INFO  ulimit - Limit: data file size: unlimited&lt;BR /&gt;
02-25-2015 13:23:42.953 +0100 INFO  ulimit - Limit: open files: 4096 files [hard maximum: unlimited]&lt;BR /&gt;
02-25-2015 13:23:42.953 +0100 INFO  ulimit - Limit: cpu time: unlimited&lt;/P&gt;

&lt;P&gt;The Data Segment Size (ulimit -d) needs to be at least 1 GB (1073741824 bytes).&lt;/P&gt;

&lt;P&gt;The Resident Memory Size (ulimit -m) needs to be at least:&lt;BR /&gt;
512 MB (536870912 bytes) for a Universal Forwarder&lt;BR /&gt;
1 GB (1073741824 bytes) for an Indexer&lt;/P&gt;

&lt;P&gt;The maximum number of open files (ulimit -n) should be increased to at least 8192.&lt;/P&gt;

&lt;P&gt;Data file size (ulimit -f) may be set to unlimited, as the maximum file size is dictated by the OS / filesystem.&lt;/P&gt;

&lt;P&gt;These values are set on a per-user basis in /etc/security/limits (or via smit chuser).&lt;BR /&gt;
It gets a little confusing because the values in /etc/security/limits are in 512-byte blocks, the values from ulimit are in KB, and the values in splunkd.log are in bytes.&lt;/P&gt;
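&lt;P&gt;The three unit systems can be cross-checked with shell arithmetic; a small sketch using the two conversion factors (1024 bytes per KB, 512 bytes per block):&lt;/P&gt;

```shell
# The 1 GB data segment limit expressed in each unit system:
bytes=1073741824              # what splunkd.log reports
kb=$(( bytes / 1024 ))        # what ulimit -a reports
blocks=$(( bytes / 512 ))     # what /etc/security/limits stores
echo "${bytes} bytes = ${kb} KB = ${blocks} blocks"
# -> 1073741824 bytes = 1048576 KB = 2097152 blocks
```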

&lt;P&gt;Let's have a look at a worked example.&lt;/P&gt;

&lt;OL&gt;
&lt;LI&gt;Login as root&lt;/LI&gt;
&lt;LI&gt;# smitty chuser&lt;BR /&gt;
Change the values for:&lt;BR /&gt;
Soft DATA segment [2097152]&lt;BR /&gt;
Soft RSS size [1048576]&lt;BR /&gt;
Soft NOFILE descriptors [8192]&lt;BR /&gt;
Soft FILE size [-1]&lt;/LI&gt;
&lt;/OL&gt;

&lt;P&gt;Save and commit changes.&lt;/P&gt;

&lt;P&gt;This basically just edits /etc/security/limits:&lt;/P&gt;

&lt;P&gt;...&lt;BR /&gt;
*&lt;BR /&gt;
* Sizes are in multiples of 512 byte blocks, CPU time is in seconds&lt;BR /&gt;
*&lt;BR /&gt;
* fsize - soft file size in blocks&lt;BR /&gt;
* core - soft core file size in blocks&lt;BR /&gt;
* cpu - soft per process CPU time limit in seconds&lt;BR /&gt;
* data - soft data segment size in blocks&lt;BR /&gt;
* stack - soft stack segment size in blocks&lt;BR /&gt;
* rss - soft real memory usage in blocks&lt;BR /&gt;
* nofiles - soft file descriptor limit&lt;BR /&gt;
* fsize_hard - hard file size in blocks&lt;BR /&gt;
* core_hard - hard core file size in blocks&lt;BR /&gt;
* cpu_hard - hard per process CPU time limit in seconds&lt;BR /&gt;
* data_hard - hard data segment size in blocks&lt;BR /&gt;
* stack_hard - hard stack segment size in blocks&lt;BR /&gt;
* rss_hard - hard real memory usage in blocks&lt;BR /&gt;
* nofiles_hard - hard file descriptor limit&lt;BR /&gt;
*&lt;BR /&gt;
* The following table contains the default hard values if the&lt;BR /&gt;
* hard values are not explicitly defined:&lt;BR /&gt;
*&lt;BR /&gt;
* Attribute Value&lt;BR /&gt;
* ========== ============&lt;BR /&gt;
* fsize_hard set to fsize&lt;BR /&gt;
* cpu_hard set to cpu&lt;BR /&gt;
* core_hard -1&lt;BR /&gt;
* data_hard -1&lt;BR /&gt;
* stack_hard 8388608&lt;BR /&gt;
* rss_hard -1&lt;BR /&gt;
* nofiles_hard -1&lt;BR /&gt;
*&lt;BR /&gt;
* NOTE: A value of -1 implies "unlimited"&lt;BR /&gt;
*&lt;/P&gt;

&lt;P&gt;default:&lt;BR /&gt;
 fsize = 2097151&lt;BR /&gt;
core = 2097151&lt;BR /&gt;
cpu = -1&lt;BR /&gt;
data = 262144&lt;BR /&gt;
rss = 65536&lt;BR /&gt;
stack = 65536&lt;BR /&gt;
nofiles = 2000&lt;/P&gt;

&lt;P&gt;root:&lt;BR /&gt;
data = 2097152&lt;BR /&gt;
rss = 1048576&lt;BR /&gt;
nofiles = 8192&lt;BR /&gt;
fsize = -1&lt;/P&gt;

&lt;P&gt;daemon:&lt;/P&gt;

&lt;P&gt;...&lt;/P&gt;
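&lt;P&gt;If smitty is not available, the same attributes can be set non-interactively with chuser; the values below match the worked example (512-byte blocks, -1 meaning unlimited):&lt;/P&gt;

```shell
# Non-interactive equivalent of the smitty chuser step (run as root on AIX).
chuser data=2097152 rss=1048576 nofiles=8192 fsize=-1 root
# Confirm what was written to /etc/security/limits:
lsuser -a data rss nofiles fsize root
```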

&lt;OL&gt;
&lt;LI&gt;&lt;P&gt;Logout root&lt;/P&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;P&gt;Login root (to pick up the changes)&lt;/P&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;P&gt;# ulimit -a&lt;/P&gt;

&lt;P&gt;time(seconds) unlimited&lt;BR /&gt;
file(blocks) unlimited&lt;BR /&gt;
data(kbytes) 1048576&lt;BR /&gt;
stack(kbytes) 32768&lt;BR /&gt;
memory(kbytes) 524288&lt;BR /&gt;
coredump(blocks) 2097151&lt;BR /&gt;
nofiles(descriptors) 8192&lt;BR /&gt;
threads(per process) unlimited&lt;BR /&gt;
processes(per user) unlimited&lt;/P&gt;&lt;/LI&gt;
&lt;/OL&gt;

&lt;P&gt;The values look correct &lt;span class="lia-unicode-emoji" title=":slightly_smiling_face:"&gt;🙂&lt;/span&gt;&lt;/P&gt;

&lt;OL&gt;
&lt;LI&gt;&lt;P&gt;Start Splunk&lt;/P&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;P&gt;Check $SPLUNK_HOME/var/log/splunk/splunkd.log&lt;/P&gt;&lt;/LI&gt;
&lt;/OL&gt;

&lt;P&gt;....&lt;BR /&gt;
03-31-2015 02:10:27.952 -0700 INFO LicenseMgr - Tracker init complete...&lt;BR /&gt;
03-31-2015 02:10:27.987 -0700 INFO ulimit - Limit: virtual address space size: unlimited&lt;BR /&gt;
03-31-2015 02:10:27.987 -0700 INFO ulimit - Limit: data segment size: 1073741824 bytes [hard maximum: unlimited]&lt;BR /&gt;
03-31-2015 02:10:27.987 -0700 INFO ulimit - Limit: resident memory size: 536870912 bytes [hard maximum: unlimited]&lt;BR /&gt;
03-31-2015 02:10:27.987 -0700 INFO ulimit - Limit: stack size: 33554432 bytes [hard maximum: 4294967296 bytes]&lt;BR /&gt;
03-31-2015 02:10:27.987 -0700 INFO ulimit - Limit: core file size: 1073741312 bytes [hard maximum: unlimited]&lt;BR /&gt;
03-31-2015 02:10:27.987 -0700 INFO ulimit - Limit: data file size: unlimited&lt;BR /&gt;
03-31-2015 02:10:27.987 -0700 INFO ulimit - Limit: open files: 8192 files [hard maximum: unlimited]&lt;BR /&gt;
03-31-2015 02:10:27.987 -0700 INFO ulimit - Limit: cpu time: unlimited&lt;BR /&gt;
03-31-2015 02:10:27.993 -0700 INFO loader - Splunkd starting (build 245427).&lt;BR /&gt;
.....&lt;/P&gt;
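&lt;P&gt;A quick way to confirm the warnings are gone after a restart (a sketch; the path assumes a default universal forwarder install under /opt/splunkforwarder):&lt;/P&gt;

```shell
# Count any remaining "Splunk may not work" ulimit warnings in splunkd.log.
SPLUNK_HOME=${SPLUNK_HOME:-/opt/splunkforwarder}
log="$SPLUNK_HOME/var/log/splunk/splunkd.log"
warns=$(grep -c "Splunk may not work" "$log" 2>/dev/null || true)
echo "ulimit warnings found: ${warns:-0}"
```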

&lt;P&gt;Splunk is running and stable.&lt;/P&gt;

&lt;P&gt;As you can see, the values for data and rss in splunkd.log agree with the values from ulimit -a (as root) and /etc/security/limits:&lt;BR /&gt;
Data Segment Size: 1073741824 bytes (splunkd.log) = 1048576 KiB (ulimit) = 2097152 blocks (/etc/security/limits)&lt;BR /&gt;
Resident Memory Size 536870912 bytes (splunkd.log) = 524288 KiB (ulimit) = 1048576 blocks (/etc/security/limits)&lt;/P&gt;

&lt;P&gt;HTH&lt;BR /&gt;
Shaky&lt;/P&gt;</description>
      <pubDate>Mon, 28 Sep 2020 19:03:07 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Why-does-Splunk-6-1-2-forwarder-on-an-AIX-7-1-machine-keep/m-p/165187#M33499</guid>
      <dc:creator>dshakespeare_sp</dc:creator>
      <dc:date>2020-09-28T19:03:07Z</dc:date>
    </item>
    <item>
      <title>Re: Why does Splunk 6.1.2 forwarder on an AIX 7.1 machine keep crashing?</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Why-does-Splunk-6-1-2-forwarder-on-an-AIX-7-1-machine-keep/m-p/165188#M33500</link>
      <description>&lt;P&gt;Is there any PowerPC hardware sizing guide for running Splunk Enterprise (all roles in a distributed search deployment)?&lt;/P&gt;

&lt;P&gt;Or even with Red Hat on PowerPC?&lt;/P&gt;</description>
      <pubDate>Sat, 25 Apr 2015 00:41:55 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Why-does-Splunk-6-1-2-forwarder-on-an-AIX-7-1-machine-keep/m-p/165188#M33500</guid>
      <dc:creator>theunf</dc:creator>
      <dc:date>2015-04-25T00:41:55Z</dc:date>
    </item>
    <item>
      <title>Re: Why does Splunk 6.1.2 forwarder on an AIX 7.1 machine keep crashing?</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Why-does-Splunk-6-1-2-forwarder-on-an-AIX-7-1-machine-keep/m-p/165189#M33501</link>
      <description>&lt;P&gt;Another quick way to update the ulimits is to use the &lt;STRONG&gt;chuser&lt;/STRONG&gt; command. For example, "chuser fsize=-1 root" would set the maximum file size to unlimited. Just remember that this method requires you to log the specified user off (assuming you are logged in as that user) and back in for the change to take effect.&lt;/P&gt;</description>
      <pubDate>Tue, 21 Jul 2015 20:48:21 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Why-does-Splunk-6-1-2-forwarder-on-an-AIX-7-1-machine-keep/m-p/165189#M33501</guid>
      <dc:creator>aafogles</dc:creator>
      <dc:date>2015-07-21T20:48:21Z</dc:date>
    </item>
  </channel>
</rss>

