Hi @tscroggins, may I know how this works, please: (?<char>(?=\S)\X)? Google told me that \X matches a Unicode character (a grapheme cluster), nice. But may I know whether you are aware of this from Perl, Python regex, or somewhere else, and what does the (?=\S) part do?
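To illustrate the question, a minimal sketch of the two pieces of the pattern. (?=\S) is a zero-width lookahead: it requires that the next character is non-whitespace without consuming anything. \X (a full grapheme cluster) is supported by PCRE and by the third-party Python "regex" module, but not by the standard-library re module, so this sketch substitutes a plain "." after the lookahead:

```python
import re

# (?=\S) asserts the next character is not whitespace, consuming nothing;
# the following "." then matches that character. In the original pattern
# "." would be "\X" (one grapheme cluster) under PCRE or the "regex" module.
pattern = re.compile(r"(?P<char>(?=\S).)")

# The space in "a b" is skipped because the lookahead rejects whitespace.
matches = [m.group("char") for m in pattern.finditer("a b")]
print(matches)  # ['a', 'b']
```

In the original expression the lookahead prevents \X from matching a whitespace grapheme, so the named group `char` captures only visible characters.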
Hello @WumboJumbo675, if there are no errors in the connection between UFs > HFs > Splunk Cloud, there are no syntax errors, the index is created, and the precedence of the inputs is correct, then I believe a reboot is a good option. Thanks.
Hello @kenbaugher, the ulimit -d command only displays the current limit; it does not persist a new configuration. Please use the following Splunk doc to apply the ulimit configuration: https://docs.splunk.com/Documentation/Splunk/9.2.0/Troubleshooting/ulimitErrors#Set_limits_using_.2Fetc.2Fsecurity.2Flimits.conf Thanks.
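As a hedged sketch of what the doc describes: the persistent limits go into /etc/security/limits.conf for the account that runs splunkd. The user name "splunk" below is an assumption; adjust it and the values to your environment and to the minimums in the linked doc:

```
# /etc/security/limits.conf -- illustrative entries for the user that
# runs splunkd ("splunk" is a placeholder for your service account).
# "data" is the data segment size that ulimit -d reports.
splunk soft data unlimited
splunk hard data unlimited
```

After editing the file, the splunk user needs a fresh login session (and splunkd a restart) for the new limits to apply.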
We just installed the forwarder on one of our VIOS systems to ensure we could get this working; however, each time we try to start it, we see the below in our splunkd.log:

02-09-2024 13:28:54.797 -0600 WARN ulimit [80544161 MainThread] - A system resource limit on this machine is below the minimum recommended value: system_resource = Data segment size (ulimit -d); current_limit = 134217728; recommended_minimum_value = 536870912. Change the operating system resource limits to meet the minimum recommended values for Splunk Enterprise.
02-09-2024 13:28:54.797 -0600 INFO ulimit [80544161 MainThread] - Limit: data file size: unlimited
02-09-2024 13:28:55.258 -0600 WARN Thread [86376799 HTTPDispatch] - HTTPDispatch: about to throw a ThreadException: pthread_create: Not enough space; 43 threads active. Trying to create batchreader0

We issued the ulimit -d command to update this to unlimited, but we are still seeing the issue.
Thanks for the response! When I search logs for the heavy forwarder, I see the below TCP message:

WARN AutoLoadBalancedConnectionStrategy [7892 TcpOutEloop] - Current dest host connection 44.218.224.52:9997

There are no connection errors within the splunkd.log on the DCs. Confirmed there are no syntax errors and the inputs list output is correct. Confirmed there is an index for 'wineventlog', as that is where the application/system/DNS logs are flowing. Now that I think about it, I made the permission changes for the Splunk service account to be able to access logs on the DCs but never rebooted them. I am wondering if a reboot is required to apply the changes... Too bad I cannot reboot any of the DCs until their scheduled reboot date.
The answer depends on your Splunk Cloud environment: how much data you have, what kind of data it is, etc. I think you should find a local company or consultant who can help you.
Hi, you need to remember that changes on the HF affect only new events, and the HF needs a restart before they take effect. Is this HF the first full Splunk instance on the path to Splunk Cloud? Have you tried setting KV_MODE to none on the SC side to check whether it helps with those old events? r. Ismo
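For reference, a minimal sketch of what that search-time setting could look like. KV_MODE lives in props.conf and is applied at search time; "my_sourcetype" below is a hypothetical placeholder for the affected sourcetype:

```
# props.conf (search-time, on the Splunk Cloud / search head side)
# "my_sourcetype" is a placeholder; use the affected sourcetype.
[my_sourcetype]
KV_MODE = none
```

Setting KV_MODE = none disables automatic key-value extraction at search time, which is why it can change how already-indexed events render.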
Thanks! The strftime() function worked for this purpose. Here's the search: [search criteria]
| eval mytime=strftime(_time,"%Y-%m-%d %H:%M:%S.%Q")
| table _time, _raw, mytime
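As a cross-check of what that format string produces, here is a sketch of the equivalent formatting in Python. Note that %Q is Splunk's subsecond specifier and has no Python counterpart, so the milliseconds are appended from %f truncated to three digits; the function name and UTC rendering are assumptions (Splunk formats in the search-time timezone):

```python
from datetime import datetime, timezone

def splunk_style(epoch: float) -> str:
    """Format an epoch timestamp like Splunk's "%Y-%m-%d %H:%M:%S.%Q".

    Python's strftime has no %Q, so milliseconds come from %f
    truncated to three digits. Rendered in UTC for determinism.
    """
    dt = datetime.fromtimestamp(epoch, tz=timezone.utc)
    return dt.strftime("%Y-%m-%d %H:%M:%S.") + dt.strftime("%f")[:3]

print(splunk_style(0.5))  # 1970-01-01 00:00:00.500
```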
Hi, you could look at my old post https://community.splunk.com/t5/Getting-Data-In/How-to-apply-source-file-date-using-INGEST-as-Time/m-p/596865. You need to make small modifications to it: 1) select the correct format in the first replace part to get the year-to-hour portion from source; 2) replace the tostring part so it takes the minutes-to-subsecond portion from _raw (e.g. with substr/replace); 3) modify the format string to match your combined year-to-subsecond format. You can test this the same way I did in that post. If needed, don't hesitate to ask for more help. Remember that INGEST_EVAL must be a single command. r. Ismo
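Purely for illustration, a heavily hedged transforms.conf sketch of the shape such an expression might take. The stanza name, regexes, substr offsets, and format string are all hypothetical and must be adapted to your actual source and _raw formats as described in the linked post:

```
# transforms.conf -- hypothetical sketch; adapt regexes, offsets,
# and the strptime format to your own source/_raw layout.
[set_time_from_source_and_raw]
# Year-to-hour pulled from source, minutes-to-subsecond pulled from
# _raw, concatenated and parsed in one INGEST_EVAL expression
# (INGEST_EVAL must remain a single command).
INGEST_EVAL = _time=strptime(replace(source, "^.*(\d{4}-\d{2}-\d{2}_\d{2}).*$", "\1").substr(_raw, 14, 10), "%Y-%m-%d_%H:%M:%S.%3N")
```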
Hi, I don't think there are any official instructions for this. I believe you could easily set up a distributed Splunk environment on-prem and switch your log collection towards it. Also, if you have kept your KOs in git or another version control system, you can just install them into the new environment. Getting your old data out of Splunk Cloud will be tricky. It is probably best if you can leave it there and set up e.g. federated search to use it. If not, there is no official way to get those buckets/data back on-prem. r. Ismo
@ITWhisperer, regarding this: "3) edit your dashboard panel and change the x-axis title to none". I have found the solution to this issue. I appended the following SPL to the existing search, and the visualization updated automatically to reflect the changes:

| timechart span=1d count
| eval Date=strftime(_time, "%m/%d")
| table Date, count

@ITWhisperer thanks for your time...
Can you let us know which of the following three log messages you see?

"Unexpected event id" (9.1.2 still logs this)
"Invalid ACK received from indexer" (9.1.2 should not log this)
"Got unexpected ACK with eventid" (9.1.2 should not log this)
Hi, even though Splunk can receive a syslog feed, you shouldn't use it for that. With Splunk you will lose more of those events than with a real syslog server. In production, always use an HA syslog server instead of an HF with a syslog receiver. r. Ismo
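The usual pattern behind this advice is a dedicated syslog daemon writing to files that a Universal Forwarder then monitors. A minimal rsyslog sketch, with illustrative ports and paths (not a definitive setup, and rsyslog is just one common choice alongside e.g. syslog-ng):

```
# /etc/rsyslog.d/splunk.conf -- minimal sketch: receive UDP/TCP syslog
# and write per-host files for a Universal Forwarder to monitor.
module(load="imudp")
input(type="imudp" port="514")
module(load="imtcp")
input(type="imtcp" port="514")

# One file per sending host under /var/log/remote/ (path is illustrative)
template(name="PerHostFile" type="string"
         string="/var/log/remote/%HOSTNAME%/syslog.log")
action(type="omfile" dynaFile="PerHostFile")
```

Because the daemon persists events to disk immediately, nothing is lost while Splunk restarts, which is the main failure mode of pointing syslog senders directly at an HF.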
Hi @ITWhisperer @PickleRick, yes, the sample event I gave you earlier was the output of that event only. The whole command works as the main search, but if I put the same thing in as a subsearch, it doesn't work.