All Posts

Since September 2021, Splunk does not include Python 2, so you need to update your code if it's not compatible with Python 3. https://www.splunk.com/en_us/blog/platform/removing-python-2-from-new-splunk-cloud-and-splunk-enterprise-releases-starting-september-2021.html
@stevenbo I am curious why you need to do this, tbh. You may also find that your current setup will be unsupported after your changes. It's always best to get some top cover from Splunk Support, especially if it's going to be a production system.
Hi, no, I still don't know what the message means!
What is your business problem?
Hi Splunkers, I have a strange behavior with a Splunk Enterprise Security SH. In the target environment, we have an indexer cluster queried by 2 SHs: a Core one and an Enterprise Security one. For a particular index, if we perform a search on the ES SH, we cannot see data. I mean, even if we perform the simplest query possible, which is: index=<index_name>, we get no results. However, if I try the same search on the Core SH, the data are shown. This behavior seems very strange to me because it happens only with this specific index; all other indexes return identical data whether the query is performed on the ES SH or on the Core SH. So, in a nutshell: indexes that return results on the Core SH: N; indexes that return results on the ES SH: N - 1.
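One thing worth ruling out (an assumption on my part, since roles aren't mentioned in the post): the ES SH may carry a role-level index restriction or search filter that the Core SH does not. A quick check you can run on both SHs and compare:

| rest /services/authorization/roles splunk_server=local
| table title srchIndexesAllowed srchIndexesDefault srchFilter

If the role you search with lists the index on the Core SH but not on the ES SH, that would explain the missing results.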
Here is the raw:
Good morning, thank you for the feedback. Unfortunately, the netmask is not fixed... I'll try with the app https://splunkbase.splunk.com/app/6595
I would appreciate any documentation on events per second (EPS) recorded from a flat file monitored by a universal forwarder.
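In the meantime, you can measure it yourself from the forwarder's internal metrics, assuming the UF forwards its _internal logs to the indexers (the series path below is a placeholder for your monitored file):

index=_internal source=*metrics.log* group=per_source_thruput series="/var/log/myapp.log"
| timechart span=1m avg(eps) AS avg_eps max(eps) AS peak_eps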
My first hint whenever "something strange" happens seemingly at OS level would, of course, be to check SELinux.
As a side note - I suppose this is some sort of a typo and your search contains "search action=start", not "search action= start" (notice the space in the middle). Assuming that... it's a bit strange, because if all your events follow the same syntax, the search looks relatively sound.

The normal approach to debugging searches is either to start from the beginning and verify that each step gives you the desired results, so that after adding each subsequent step you can see where it stops doing what you want, or to cut commands from the end and see when it starts working properly (for that stage of the pipeline). I'd cut back to just after the rex commands and search for the events that should match the results missing from your final output. Then add one command after another and see.

Two possible culprits:
1) The default limit of results for timechart (but that's rather unlikely, because you'd get 10 results plus "OTHER" by default, not 8 results).
2) Case of field names - field names are case sensitive whereas field values are not, so if your services field contains "done" in most cases but "DONE" for those missing ones, the whatever:DONE fields would _not_ get matched by the *done wildcard in the table command.
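If the second culprit is the one, here is a minimal sketch of the normalization idea (field names taken from the query in the question):

| rex field=_raw "ACTION:\s+(?<actions>\w+)"
| eval actions=lower(actions)
| eval split=services.":".actions
| timechart span=1d limit=0 count by split

lower() collapses START/start and DONE/done into a single series name each, and limit=0 rules out the timechart series cap at the same time.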
dyld[8605]: Library not loaded: @executable_path/../lib/libbz2.1.dylib
Referenced from: <155E4B06-EBFB-3512-8A38-AF5B870FD832> /opt/splunk/bin/splunkd
Reason: tried: '/opt/splunk/lib/libbz2.1.dylib' (code signature in <8E64DF20-704B-3A23-9512-41A3BCD72DEA> '/opt/splunk/lib/libbz2.1.0.3.dylib' not valid for use in process: library load disallowed by system policy), '/usr/lib/libbz2.1.dylib' (no such file, not in dyld cache)
ERROR: pid 8605 terminated with signal 6
Hi, are you using the Splunk distribution of the OTel collector? You'll need it to use the smartagent receivers, I think. Here is a working example; please note all the indentation, since YAML is picky. If you want to share your agent_config.yaml, that may help.
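(The original example did not survive here; below is a generic sketch of the smartagent receiver shape in a Splunk OTel collector config. The monitor type, port, and exporter name are placeholder assumptions, not the poster's actual example.)

receivers:
  smartagent/prometheus-exporter:
    type: prometheus-exporter   # any Smart Agent monitor type goes here
    host: localhost
    port: 9090

service:
  pipelines:
    metrics:
      receivers: [smartagent/prometheus-exporter]
      exporters: [signalfx]     # assumes a signalfx exporter is defined elsewhere in the config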
Hi @jessieb_83, let me understand: you want to use a removable hard drive as $SPLUNK_DB? I'm not sure that's possible. Open a case with Splunk Support; they are the only ones who can answer you. Ciao. Giuseppe
Hi @Millowster, this (and many others) is the reason why I don't use Dashboard Studio: not all the functions of Classic Dashboards are implemented yet. Ciao. Giuseppe
Here is the sample log:

{"date": "1/2/2022 00:12:22,124", "DATA": "[http:nio-12567-exec-44] DIP: [675478-7655a-56778d-655de45565] Data: [7665-56767ed-5454656] MIM: [483748348-632637f-38648266257d] FLOW: [NEW] { SERVICE: AAP | Applicationid: iis-675456 | ACTION: START | REQ: GET data published/data/ui } DADTA -:TIME:<TIMESTAMP> (0) 1712721546785 to 1712721546885 ms GET /v8/wi/data/*, GET data/ui/wi/load/success", "tags": {"host": "GTU5656", "insuranceid": "8786578896667", "lib": "app"}}

We have around 10 services. Using the query below, I am getting 8 services; the other 2 are not displayed in the table, but we can view them in events. Field extraction is working correctly, so I am not sure why the other 2 services are not showing up in the table.

index=test-index (data loaded) OR ("GET data published/data/ui" OR "GET /v8/wi/data/*" OR "GET data/ui/wi/load/success")
|rex field=_raw "DIP:\s+\[(?<dip>[^\]]+)."
|rex field=_raw "ACTION:\s+(?<actions>\w+)"
|rex field=_raw "SERVICE:\s+(?<services>\S+)"
|search actions= start OR actions=done NOT service="null"
|eval split=services.":".actions
|timechart span=1d count by split
|eval _time=strftime(_time, "%d/%m/%Y")
|table _time *start *done

Current output (the DCC:DONE and PIP:DONE columns are missing):

_time    | AAP:START | ACC:START | ABB:START | DCC:START | PIP:START | AAP:DONE | ACC:DONE | ABB:DONE
1/2/2022 | 1         | 100       | 1         | 100       | 1         | 1        | 66       | 1
2/2/2022 | 5         | 0         | 5         | 0         | 3         | 3        | 0        | 3
3/2/2022 | 10        | 0         | 10        | 0         | 8         | 7        | 0        | 8
4/2/2022 | 100       | 1         | 100       | 1         | 97        | 80       | 1        | 80
5/2/2022 | 0         | 5         | 0         | 5         | 350       | 0        | 4        | 0

Expected output:

_time    | AAP:START | ACC:START | ABB:START | DCC:START | PIP:START | AAP:DONE | ACC:DONE | ABB:DONE | DCC:DONE | PIP:DONE
1/2/2022 | 1         | 100       | 1         | 100       | 1         | 1        | 66       | 1        | 99       | 1
2/2/2022 | 5         | 0         | 5         | 0         | 3         | 3        | 0        | 3        | 0        | 2
3/2/2022 | 10        | 0         | 10        | 0         | 8         | 7        | 0        | 8        | 0        | 3
4/2/2022 | 100       | 1         | 100       | 1         | 97        | 80       | 1        | 80       | 1        | 90
5/2/2022 | 0         | 5         | 0         | 5         | 350       | 0        | 4        | 0        | 5        | 200
@bowesmana, thank you so much, it worked.
Try this at the end of your query:

| transpose 0 header_field=Tipo_Traffic
| eval diff='APP DELIV REPORT'-MT
| where diff!=0
This looks very useful. Is there a recommended way to set maxSendQSize? Do I need to vary it depending on the throughput of the HF per pipeline? I'm assuming maxSendQSize would be an in-memory buffer/queue per pipeline, in addition to the overall maxQueueSize? Finally, I'm assuming this would be useful when there is no load balancer in front of the indexers?
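For reference, maxSendQSize is set in outputs.conf under a tcpout stanza. A minimal sketch, with the caveat that the stanza name, servers, and value are illustrative assumptions (check outputs.conf.spec for the exact value format and defaults in your version):

# outputs.conf on the forwarder
[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
# per-connection send buffer size, in bytes (value is an assumption, not a recommendation)
maxSendQSize = 4194304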
What do you want to extract? See this example, which extracts parts of the text:

| makeresults
| fields - _time
| eval msgs=split("Initial message received with below details,Letter published correctley to ATM subject,Letter published correctley to DMM subject,Letter rejected due to: DOUBLE_KEY,Letter rejected due to: UNVALID_LOG,Letter rejected due to: UNVALID_DATA_APP",",")
| mvexpand msgs
| rex field=msgs "(Initial message |Letter published correctley to |Letter rejected due to: )(?<reason>.*)"

You'll need to decide what you want and what you intend to use it for.
Hello, I have these two results. I need to compare them and flag when they are different. Could you help me? Regards.