All Posts



I would appreciate any documentation on the events-per-second (EPS) rates that can be achieved when reading a flat file with a universal forwarder.
My first hunch whenever "something strange" happens seemingly at the OS level would, of course, be to check SELinux.
As a side note - I suppose this is some sort of typo and your search contains "search action=start", not "search action= start" (notice the space in the middle). Assuming that, it's a bit strange, because if all your events follow the same syntax, the search looks relatively sound.

The normal approach to debugging searches is either to start from the beginning and verify that each step gives you the desired results, so that after adding each subsequent step you can see when it stops doing what you want, or to cut commands from the end and see when it starts working properly (for that stage of the pipeline). I'd cut back to just after the rex commands and search for the events that should match the results missing from your final output. Then add one command after another and see.

Two possible culprits:
1) The default limit of results for timechart (but that's unlikely, because by default you'd get 10 results plus "OTHER", not 8 results).
2) The case of the field names - field names are case-sensitive whereas field values are not, so if your services field contains "done" in most cases but "DONE" for the missing ones, the whatever:DONE fields would _not_ get matched by the *done wildcard in the table command.
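If the second culprit (value case) turns out to be the issue, normalising case when building the split field sidesteps it entirely. A sketch under that assumption, reusing the rex extractions from your search:

```
| rex field=_raw "ACTION:\s+(?<actions>\w+)"
| rex field=_raw "SERVICE:\s+(?<services>\S+)"
| eval split=upper(services).":".upper(actions)
| timechart span=1d count by split
| table _time *START *DONE
```

With upper() forcing every generated field name to a single case, the wildcards in the table command only need to match one form.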
dyld[8605]: Library not loaded: @executable_path/../lib/libbz2.1.dylib
  Referenced from: <155E4B06-EBFB-3512-8A38-AF5B870FD832> /opt/splunk/bin/splunkd
  Reason: tried: '/opt/splunk/lib/libbz2.1.dylib' (code signature in <8E64DF20-704B-3A23-9512-41A3BCD72DEA> '/opt/splunk/lib/libbz2.1.0.3.dylib' not valid for use in process: library load disallowed by system policy), '/usr/lib/libbz2.1.dylib' (no such file, not in dyld cache)
ERROR: pid 8605 terminated with signal 6
Hi, are you using the Splunk distribution of the OTel collector? You'll need it to use the smartagent receivers, I think. Here is a working example. Please note all the indentation, since YAML is picky. If you want to share your agent_config.yaml, that may help.
Hi @jessieb_83, let me understand: you want to use a removable hard drive as $SPLUNK_DB? I'm not sure that's possible. Open a case with Splunk Support; they are the only ones who can answer that for you. Ciao. Giuseppe
Hi @Millowster, this (and many other things) is the reason why I don't use Dashboard Studio: not all the functions of Classic Dashboards are implemented yet. Ciao. Giuseppe
Here is the sample log:

{"date": "1/2/2022 00:12:22,124", "DATA": "[http:nio-12567-exec-44] DIP: [675478-7655a-56778d-655de45565] Data: [7665-56767ed-5454656] MIM: [483748348-632637f-38648266257d] FLOW: [NEW] { SERVICE: AAP | Applicationid: iis-675456 | ACTION: START | REQ: GET data published/data/ui } DADTA -:TIME:<TIMESTAMP> (0) 1712721546785 to 1712721546885 ms GET /v8/wi/data/*, GET data/ui/wi/load/success", "tags": {"host": "GTU5656", "insuranceid": "8786578896667", "lib": "app"}}

We have around 10 services. Using the query below I get 8 services; the other 2 are not displayed in the table, but we can view them in events. Field extraction is working correctly, so I'm not sure why the other 2 services are not showing up in the table.

index=test-index (data loaded) OR ("GET data published/data/ui" OR "GET /v8/wi/data/*" OR "GET data/ui/wi/load/success")
| rex field=_raw "DIP:\s+\[(?<dip>[^\]]+)."
| rex field=_raw "ACTION:\s+(?<actions>\w+)"
| rex field=_raw "SERVICE:\s+(?<services>\S+)"
| search actions= start OR actions=done NOT service="null"
| eval split=services.":".actions
| timechart span=1d count by split
| eval _time=strftime(_time, "%d/%m/%Y")
| table _time *start *done

Current output (the DCC:DONE and PIP:DONE fields are missing):

_time    | AAP:START | ACC:START | ABB:START | DCC:START | PIP:START | AAP:DONE | ACC:DONE | ABB:DONE
1/2/2022 | 1         | 100       | 1         | 100       | 1         | 1        | 66       | 1
2/2/2022 | 5         | 0         | 5         | 0         | 3         | 3        | 0        | 3
3/2/2022 | 10        | 0         | 10        | 0         | 8         | 7        | 0        | 8
4/2/2022 | 100       | 1         | 100       | 1         | 97        | 80       | 1        | 80
5/2/2022 | 0         | 5         | 0         | 5         | 350       | 0        | 4        | 0

Expected output:

_time    | AAP:START | ACC:START | ABB:START | DCC:START | PIP:START | AAP:DONE | ACC:DONE | ABB:DONE | DCC:DONE | PIP:DONE
1/2/2022 | 1         | 100       | 1         | 100       | 1         | 1        | 66       | 1        | 99       | 1
2/2/2022 | 5         | 0         | 5         | 0         | 3         | 3        | 0        | 3        | 0        | 2
3/2/2022 | 10        | 0         | 10        | 0         | 8         | 7        | 0        | 8        | 0        | 3
4/2/2022 | 100       | 1         | 100       | 1         | 97        | 80       | 1        | 80       | 1        | 90
5/2/2022 | 0         | 5         | 0         | 5         | 350       | 0        | 4        | 0        | 5        | 200
@bowesmana , Thank you so much, it worked
Try this on the end of your query:

| transpose 0 header_field=Tipo_Traffic
| eval diff='APP DELIV REPORT'-MT
| where diff!=0
This looks very useful - is there a recommended way to set maxSendQSize? Do I need to vary it depending on the throughput of the HF per pipeline? I'm assuming maxSendQSize would be an in-memory buffer/queue per pipeline, in addition to the overall maxQueueSize? Finally, I'm assuming this would be useful when there is no load balancer in front of the indexers?
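For context, a minimal outputs.conf sketch showing where the two settings sit relative to each other. The group name and sizes here are illustrative assumptions, not recommendations - check outputs.conf.spec for the exact units and defaults in your version:

```
# outputs.conf on the heavy forwarder (illustrative values only)
[tcpout:my_indexers]           # hypothetical output group name
server = idx1:9997, idx2:9997
maxQueueSize = 512KB           # overall output queue for the group
maxSendQSize = 131072          # per-connection send buffer (see outputs.conf.spec for units/default)
```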
What do you want to extract? See this example, which extracts parts of the text:

| makeresults
| fields - _time
| eval msgs=split("Initial message received with below details,Letter published correctley to ATM subject,Letter published correctley to DMM subject,Letter rejected due to: DOUBLE_KEY,Letter rejected due to: UNVALID_LOG,Letter rejected due to: UNVALID_DATA_APP",",")
| mvexpand msgs
| rex field=msgs "(Initial message |Letter published correctley to |Letter rejected due to: )(?<reason>.*)"

You'll need to decide what you want and what you intend to use it for.
Hello, I have these two results. I need to compare them and be told when they are different - could you help me? Regards.
Hi @bowesmana, thank you for sharing the query, it worked. But I have another question: how do we write a rex to extract these strings?

index=app-index source=application.logs ("Initial message received with below details" OR "Letter published correctley to ATM subject" OR "Letter published correctley to DMM subject" OR "Letter rejected due to: DOUBLE_KEY" OR "Letter rejected due to: UNVALID_LOG" OR "Letter rejected due to: UNVALID_DATA_APP")
Look at the raw text rather than the JSON to see what Splunk may be using for timestamp detection. The JSON view is sorted and Splunk will only look a certain distance into the event to detect a timestamp (128 bytes by default). If it cannot find a timestamp, then it will use current time https://docs.splunk.com/Documentation/Splunk/9.2.1/Admin/Propsconf#Timestamp_extraction_configuration
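If the timestamp sits deeper into the raw event than the default lookahead, you can point Splunk at it explicitly in props.conf. A sketch under that assumption - the sourcetype name and time format here are illustrative, adjust them to your data:

```
# props.conf (illustrative sourcetype and format)
[my_json_sourcetype]
TIME_PREFIX = \"date\":\s*\"
TIME_FORMAT = %d/%m/%Y %H:%M:%S,%3N
MAX_TIMESTAMP_LOOKAHEAD = 25
```

TIME_PREFIX moves the starting point of timestamp detection, so MAX_TIMESTAMP_LOOKAHEAD then only needs to cover the timestamp itself.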
Try

index=app-index source=application.logs ("Initial message received with below details" OR "Initial message Successfull" OR "Initial message Error")
| rex field=_raw "RampData :\s(?<RampdataSet>\w+)"
| rex "Initial message (?<type>\w+)"
| chart count over RampdataSet by type
| addtotals

This extracts a 'type' field, which will be received, Error or Successfull, and then the chart command will do what you want - it will give you the field names as above, but you can rename them to what you want.
You can use the populating search of the drop-down to add dynamic options, and do something like this to categorise the host type:

index=aaa source="/var/log/test1.log"
| stats count by host
| eval category=case(match(host, "t"), "Test", match(host, "q"), "QA", match(host, "p"), "Prod", true(), "Unknown")

Change the match() regexes as needed, along with the category names you want to show. category will be the <fieldForLabel>, and then you need to make the <fieldForValue> contain the value element you want for the token.
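To put that in context, a minimal SimpleXML input sketch - the token name and label are illustrative assumptions:

```
<input type="dropdown" token="host_category">
  <label>Host category</label>
  <fieldForLabel>category</fieldForLabel>
  <fieldForValue>host</fieldForValue>
  <search>
    <query>index=aaa source="/var/log/test1.log" | stats count by host | eval category=case(match(host, "t"), "Test", match(host, "q"), "QA", match(host, "p"), "Prod", true(), "Unknown")</query>
  </search>
</input>
```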
No difference - same speed - what's your macro doing?
There may be a few ways to do that. Here's one.

| eval Status = case(isnotnull(IPv4) AND isnotnull(IPv6), "IPv4 + IPv6",
                     isnotnull(IPv4), "IPv4",
                     isnotnull(IPv6), "IPv6",
                     1==1, "")
Hi, I have the scenario below. My brain is very slow at this time of the day! I need an eval to create a Status field, as in the table below, that will flag whether a host is running on IPv4, IPv6, or both IPv4 + IPv6.

HOSTNAME | IPv4    | IPv6        | Status
SampleA  | 0.0.0.1 |             | IPv4
SampleB  |         | 0.0.0.2     | IPv6
SampleC  | 0.0.0.3 | A:B:C:D:E:F | IPv4 + IPv6

Thanks in advance!!!