Hello, I have these two results. I need to compare them and tell when they are different. Could you help me? Regards.
Hi @bowesmana, Thank you for sharing the query, it worked. But I have another question: how do we write a rex to extract these strings?

index=app-index source=application.logs ("Initial message received with below details" OR "Letter published correctley to ATM subject" OR "Letter published correctley to DMM subject" OR "Letter rejected due to: DOUBLE_KEY" OR "Letter rejected due to: UNVALID_LOG" OR "Letter rejected due to: UNVALID_DATA_APP")
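One possible sketch, assuming those phrases appear literally in _raw; the field names letter_status, subject, and reject_reason are made up for illustration, not confirmed by the thread:

```
index=app-index source=application.logs ("Initial message received with below details" OR "Letter published correctley to ATM subject" OR "Letter published correctley to DMM subject" OR "Letter rejected due to: DOUBLE_KEY" OR "Letter rejected due to: UNVALID_LOG" OR "Letter rejected due to: UNVALID_DATA_APP")
| rex "Letter (?<letter_status>published|rejected)"
| rex "published correctley to (?<subject>\w+) subject"
| rex "rejected due to: (?<reject_reason>\w+)"
```

Each rex is optional per event, so published events get letter_status and subject while rejected events get letter_status and reject_reason.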
Look at the raw text rather than the JSON to see what Splunk may be using for timestamp detection. The JSON view is sorted, and Splunk will only look a certain distance into the event to detect a timestamp (128 bytes by default). If it cannot find a timestamp, it will use the current time: https://docs.splunk.com/Documentation/Splunk/9.2.1/Admin/Propsconf#Timestamp_extraction_configuration
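For reference, a minimal props.conf sketch; the stanza name, TIME_PREFIX, and TIME_FORMAT below are illustrative assumptions and must be adapted to the actual raw event:

```
[your:sourcetype]
# Look further into the event than the 128-byte default
MAX_TIMESTAMP_LOOKAHEAD = 512
# Anchor timestamp parsing just before the timestamp text
TIME_PREFIX = "date":\s*"
TIME_FORMAT = %d/%m/%Y %H:%M:%S,%3N
```

Setting TIME_PREFIX so Splunk jumps straight to the timestamp is usually more robust than only widening the lookahead.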
Try

index=app-index source=application.logs ("Initial message received with below details" OR "Initial message Successfull" OR "Initial message Error")
| rex field=_raw "RampData :\s(?<RampdataSet>\w+)"
| rex "Initial message (?<type>\w+)"
| chart count over RampdataSet by type
| addtotals

This extracts a 'type' field, which will be received, Error, or Successfull, and then the chart command will do what you want. It will give you field names as above, but you can rename those to whatever you want.
You can use the populating search of the drop down to add dynamic options, and do something like this to categorise the host type:

index=aaa source="/var/log/test1.log"
| stats count by host
| eval category=case(match(host, "t"), "Test", match(host, "q"), "QA", match(host, "p"), "Prod", true(), "Unknown")

Change the match statement regex as needed, along with the category you want to show. category will be the <fieldForLabel>, and then you need to make the <fieldForValue> contain the value element you want for the token.
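As a sketch, the corresponding Simple XML input might look like the following; the token name host_tok and the choice of host as the value field are assumptions for illustration:

```
<input type="dropdown" token="host_tok">
  <label>Host</label>
  <fieldForLabel>category</fieldForLabel>
  <fieldForValue>host</fieldForValue>
  <search>
    <query>index=aaa source="/var/log/test1.log" | stats count by host | eval category=case(match(host, "t"), "Test", match(host, "q"), "QA", match(host, "p"), "Prod", true(), "Unknown")</query>
  </search>
</input>
```

The populating search fills the dropdown, the label comes from category, and $host_tok$ then carries the selected host value into other panels.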
No difference - same speed - what's your macro doing?
There may be a few ways to do that. Here's one.

| eval Status = case(isnotnull(IPv4) AND isnotnull(IPv6), "IPv4 + IPv6", isnotnull(IPv4), "IPv4", isnotnull(IPv6), "IPv6", 1==1, "")
Hi, I have the below scenario. My brain is very slow at this time of the day! I need an eval to create a Status field, as in the table below, that flags whether a host is running on IPv4, IPv6, or both IPv4 + IPv6.

HOSTNAME  IPv4     IPv6         Status
SampleA   0.0.0.1               IPv4
SampleB            0.0.0.2      IPv6
SampleC   0.0.0.3  A:B:C:D:E:F  IPv4 + IPv6

Thanks in advance!!!
Query1:

index=app-index source=application.logs "Initial message received with below details"
| rex field=_raw "RampData :\s(?<RampdataSet>\w+)"
| stats count as IntialMessage by RampdataSet

output:

RampdataSet  IntialMessage
WAC          10
WAX          30
WAM          22
STC          33
STX          66
OTP          20

Query2:

index=app-index source=application.logs "Initial message Successfull"
| rex field=_raw "RampData :\s(?<RampdataSet>\w+)"
| stats count as SuccessfullMessage by RampdataSet

output:

RampdataSet  SuccessfullMessage
WAC          0
WAX          15
WAM          20
STC          12
STX          30
OTP          10
TTC          5
TAN          7
TXN          10
WOU          12

Query3:

index=app-index source=application.logs "Initial message Error"
| rex field=_raw "RampData :\s(?<RampdataSet>\w+)"
| stats count as ErrorMessage by RampdataSet

output:

RampdataSet  ErrorMessage
WAC          0
WAX          15
WAM          20
STC          12

We want to combine the three queries and get the output shown below. How do we do that?

RampdataSet  IntialMessage  SuccessfullMessage  ErrorMessage  Total
WAC          10             0                   0             10
WAX          30             15                  15            60
WAM          22             20                  20            62
STC          33             12                  12            57
STX          66             30                  0             96
OTP          20             10                  0             30
TTC          0              5                   0             5
TAN          0              7                   0             7
TXN          0              10                  0             10
WOU          0              12                  0             12
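One way to combine the three counts, sketched on the assumption that the searches differ only in the matched string, is to append the results and sum per RampdataSet, filling missing combinations with 0:

```
index=app-index source=application.logs "Initial message received with below details"
| rex field=_raw "RampData :\s(?<RampdataSet>\w+)"
| stats count as IntialMessage by RampdataSet
| append
    [ search index=app-index source=application.logs "Initial message Successfull"
      | rex field=_raw "RampData :\s(?<RampdataSet>\w+)"
      | stats count as SuccessfullMessage by RampdataSet ]
| append
    [ search index=app-index source=application.logs "Initial message Error"
      | rex field=_raw "RampData :\s(?<RampdataSet>\w+)"
      | stats count as ErrorMessage by RampdataSet ]
| stats sum(IntialMessage) as IntialMessage sum(SuccessfullMessage) as SuccessfullMessage sum(ErrorMessage) as ErrorMessage by RampdataSet
| fillnull value=0
| eval Total=IntialMessage+SuccessfullMessage+ErrorMessage
```

A single search over all three strings plus chart (as suggested elsewhere in the thread) is cheaper, since it reads the index once instead of three times; the append form is shown only because it maps one-to-one onto the three queries above.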
We have around 10 services. Using the below query I am getting 8 services, and the other 2 are not displayed in the table, although we can view them in events. Field extraction is working correctly; not sure why the other 2 services are not showing up in the table. Please find the output.

index=test-index (data loaded) OR ("GET data published/data/ui" OR "GET /v8/wi/data/*" OR "GET data/ui/wi/load/success")
| rex field=_raw "DIP:\s+\[(?<dip>[^\]]+)."
| rex field=_raw "ACTION:\s+(?<actions>\w+)"
| rex field=_raw "SERVICE:\s+(?<services>\S+)"
| search actions=start OR actions=done NOT service="null"
| eval split=services.":".actions
| timechart span=1d count by split
| eval _time=strftime(_time, "%d/%m/%Y")
| table _time *start *done

Current output (the DCC:DONE and PIP:DONE fields are missing):

_time     AAP:START  ACC:START  ABB:START  DCC:START  PIP:START  AAP:DONE  ACC:DONE  ABB:DONE
1/2/2022  1          100        1          100        1          1         66        1
2/2/2022  5          0          5          0          3          3         0         3
3/2/2022  10         0          10         0          8          7         0         8
4/2/2022  100        1          100        1          97         80        1         80
5/2/2022  0          5          0          5          350        0         4         0

Expected output:

_time     AAP:START  ACC:START  ABB:START  DCC:START  PIP:START  AAP:DONE  ACC:DONE  ABB:DONE  DCC:DONE  PIP:DONE
1/2/2022  1          100        1          100        1          1         66        1         99        1
2/2/2022  5          0          5          0          3          3         0         3         0         2
3/2/2022  10         0          10         0          8          7         0         8         0         3
4/2/2022  100        1          100        1          97         80        1         80        1         90
5/2/2022  0          5          0          5          350        0         4         0         5         200
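One thing worth checking here (an assumption, not confirmed by the thread): timechart only charts the top 10 series by default and folds the rest into OTHER, which can hide split values once there are more than 10 of them. A sketch of the same step with the limit lifted:

```
| timechart span=1d limit=0 useother=f count by split
```

limit=0 removes the series cap and useother=f stops low-count series from being collapsed into an OTHER column.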
Hi Tony, Based on the first screen capture, the javaagent node is not reporting to the controller (it's showing 0% for status), and this is the reason it's not showing up in the dashboard. We will need to take a look at the logs to understand why the agent is unable to establish a connection with the controller. Could you capture the logs for the node that is not reporting and attach them here? (Docs on capturing logs: https://docs.appdynamics.com/appd/24.x/24.3/en/application-monitoring/install-app-server-agents/java-agent/administer-the-java-agent/java-agent-logging.) After initial analysis we will let you know if we need to collect any additional information. Thanks
Hi all! I've got an issue with macro expansion taking an excessively long time when using the keyboard shortcut Ctrl+Shift+E. I'm looking for someone to try the same thing on their own system and let me know if you're seeing this too. That will help me determine whether this is a problem in my environment or a possible bug in the software. To test, find any macro in your environment.

Establish baseline: Enter just the macro name in the search box and press Ctrl+Shift+E (or, I think, Cmd+Shift+E on Mac). Note how long it takes for the modal pop-up to show you the expanded macro. It is not necessary to run the search.

`mymacro`

Test the issue: Using the same macro as above, create a simple search that has the macro inside a subsearch. Try expanding the macro. Are you getting a slow response? For me, it takes >20 seconds to expand the macro.

| makeresults
| append [`mymacro`]

I appreciate the help from anyone willing to test.
Hi @Dean.Marchetti, No worries, just let us know how it goes when you get around to it.
Are you sure that your raw event is not a valid JSON closer to

{"date": "1/2/2022 00:12:22,124", "DATA": "[http:nio-12567-exec-44] DIP: [675478-7655a-56778d-655de45565] Data: [7665-56767ed-5454656] MIM: [483748348-632637f-38648266257d] FLOW: [NEW] { SERVICE: AAP | Applicationid: iis-675456 | ACTION: START | REQ: GET data published/data/ui } DADTA -:TIME:<TIMESTAMP> (0) 1712721546785 to 1712721546885 ms GET /v8/wi/data/*, GET data/ui/wi/load/success", "tags": {"host": "GTU5656", "insuranceid": "8786578896667", "lib": "app"}}

instead? In other words, do you not have a field named "DATA" already? Because the overall structure of your illustration is very much compliant. Assuming you have a field named DATA, a better strategy is trying to reconstruct the structure your developers intended, instead of extracting individual tidbits as random text, because your developers have clearly put thought into the data structure within DATA. I would propose something like

index=test-index (data loaded) OR ("GET data published/data/ui" OR "GET /v8/wi/data/*" OR "GET data/ui/wi/load/success")
| rex field=DATA mode=sed "s/ *[\|}\]]/\"/g s/: *\[*/=\"/g"
| rename _raw as temp
| rename DATA AS _raw
| kv
| rename temp as _raw

Your sample data should give you

ACTION = START
Applicationid = iis-675456
DIP = 675478-7655a-56778d-655de45565
Data = 7665-56767ed-5454656
FLOW = NEW
MIM = 483748348-632637f-38648266257d
REQ = GET data published/data/ui
SERVICE = AAP
date = 1/2/2022 00:12:22,124
http = nio-12567-exec-44
tags.host = GTU5656
tags.insuranceid = 8786578896667
tags.lib = app

Here is an emulation that results in my hypothesized raw log:

| makeresults
| eval _raw = "{\"date\": \"1/2/2022 00:12:22,124\", \"DATA\": \"[http:nio-12567-exec-44] DIP: [675478-7655a-56778d-655de45565] Data: [7665-56767ed-5454656] MIM: [483748348-632637f-38648266257d] FLOW: [NEW] { SERVICE: AAP | Applicationid: iis-675456 | ACTION: START | REQ: GET data published/data/ui } DADTA -:TIME:<TIMESTAMP> (0) 1712721546785 to 1712721546885 ms GET /v8/wi/data/*, GET data/ui/wi/load/success\", \"tags\": {\"host\": \"GTU5656\", \"insuranceid\": \"8786578896667\", \"lib\": \"app\"}}"
| spath
``` the above emulates index=test-index (data loaded) OR ("GET data published/data/ui" OR "GET /v8/wi/data/*" OR "GET data/ui/wi/load/success") ```

Play with the emulation and compare with real data. Note: In the unimaginable case where your developers try really hard to mess up everybody's mind and inject a semblance of JSON compliance while violating common sense, you can still apply the same principle against _raw. Like this:

index=test-index (data loaded) OR ("GET data published/data/ui" OR "GET /v8/wi/data/*" OR "GET data/ui/wi/load/success")
| rex mode=sed "s/ *[\|}\]]/\"/g s/: *\[*/=\"/g"
| kv

This is what the output would look like:

ACTION = START
Applicationid = iis-675456
DATA = http=
DIP = 675478-7655a-56778d-655de45565
Data = 7665-56767ed-5454656
FLOW = NEW
MIM = 483748348-632637f-38648266257d
REQ = GET data published/data/ui
SERVICE = AAP
host =

Without a better structure, you won't get the subnodes embedded in tags; but your original question does not seem to care about tags. Here is an emulation that resembles the actual sample you posted:

| makeresults
| eval _raw = "{\"date\": \"1/2/2022 00:12:22,124\", DATA: [http:nio-12567-exec-44] DIP: [675478-7655a-56778d-655de45565] Data: [7665-56767ed-5454656] MIM: [483748348-632637f-38648266257d] FLOW: [NEW] { SERVICE: AAP | Applicationid: iis-675456 | ACTION: START | REQ: GET data published/data/ui } DADTA -:TIME:<TIMESTAMP> (0) 1712721546785 to 1712721546885 ms GET /v8/wi/data/*, GET data/ui/wi/load/success\", \"tags\": {\"host\": \"GTU5656\", \"insuranceid\": \"8786578896667\", \"lib\": \"app\"}}"
``` the above emulates index=test-index (data loaded) OR ("GET data published/data/ui" OR "GET /v8/wi/data/*" OR "GET data/ui/wi/load/success") ```
I'm setting up a lab instance of Splunk Enterprise in prep to replace our legacy instance in a live environment, and I'm getting this error message: "homePath='/mnt/splunk_hot/abc/db' of index=abc on unusable filesystem". I'm running RHEL 8 VMs with Splunk 9.1: 2 indexers clustered together and a cluster manager. I've attached external drives for hot and cold to each indexer. The external drives have been formatted as ext4, set in fstab to mount at every boot as /mnt/splunk_hot and /mnt/splunk_cold, and indexes.conf points to them by volume. They come up at boot; I can navigate to them and write to them. They're currently owned by root. I couldn't find who should have permission over them, so I left them as-is to start. I tried enabling OPTIMISTIC_ABOUT_FILE_LOCKING=1, but that didn't do anything. That being said, I suspect I've missed a step in mounting the external drives. I wasn't able to find specifics about the way I'm doing this, so I pose the question: am I doing something wrong, or missing a step on mounting these external drives? Is that now a bad practice? I'm stumped.

My indexes.conf:

[volume:hot]
path = /mnt/splunk_hot

[volume:cold]
path = /mnt/splunk_cold

[abc]
repFactor = auto
homePath = volume:hot/abc/db
coldPath = volume:cold/abc/db
thawedPath = $SPLUNK_DB/abc/thaweddb
# We're not utilizing frozen storage at all, so I left it default

Any advice here would be greatly appreciated!
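For what it's worth: if splunkd runs as a dedicated non-root user, mount points owned by root and not writable by that user can make the index path unusable. A sketch of the check and fix, assuming the service account is named splunk (adjust to your actual account):

```
# Give the Splunk service account ownership of both mount points
chown -R splunk:splunk /mnt/splunk_hot /mnt/splunk_cold
# Verify the filesystem type Splunk sees (ext4 should be supported)
df -T /mnt/splunk_hot /mnt/splunk_cold
```

If Splunk is started as root under systemd with a User= directive, the ownership must match that user, not root.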
Hi All, I'm sorry for not replying sooner. I have been out of the office and did not have a chance to reply. The reply from Terence is not what we were looking for, but it may be an answer to the issue. We plan to test Terence's answer over the next week or so. Please stay tuned.
Hi. Were you able to overcome this issue?  
Were you able to find an answer to your question? I know there are lots of SAP customers who face the same problem. Take a look at PowerConnect. It works out of the box and pulls in SAP CPI logs and many other SAP SaaS offerings. It also helps cut down on mean time to detect and resolve other performance and security issues.
Sample logs:

{"date": "1/2/2022 00:12:22,124" DATA [http:nio-12567-exec-44] DIP: [675478-7655a-56778d-655de45565] Data: [7665-56767ed-5454656] MIM: [483748348-632637f-38648266257d] FLOW: [NEW] { SERVICE: AAP | Applicationid: iis-675456 | ACTION: START | REQ: GET data published/data/ui } DADTA -:TIME:<TIMESTAMP> (0) 1712721546785 to 1712721546885 ms GET /v8/wi/data/*, GET data/ui/wi/load/success, "tags": {"host": "GTU5656", "insuranceid": "8786578896667", "lib": "app"}}

{"date": "1/2/2022 00:12:22,124" DATA [http:nio-12567-exec-44] DIP: [675478-7655a-56778d-655de45565] Data: [7665-56767ed-5454656] MIM: [483748348-632637f-38648266257d] FLOW: [NEW] { SERVICE: AAP | Applicationid: iis-675456 | ACTION: DONE | REQ: GET data published/data/ui } DADTA -:TIME:<TIMESTAMP> (0) 1712721546785 to 1712721546885 ms GET /v8/wi/data/*, GET data/ui/wi/load/success, "tags": {"host": "GTU5656", "insuranceid": "8786578896667", "lib": "app"}}

Hi @PickleRick, I've added sample logs; let me know if you need any other details.
We don't know your data, we don't know what you're getting, and we don't know whether you match your data properly or extract the fields properly. We don't know anything except a search and an Excel table.