All Posts


There may be a few ways to do that. Here's one:

| eval Status = case(isnotnull(IPv4) AND isnotnull(IPv6), "IPv4 + IPv6", isnotnull(IPv4), "IPv4", isnotnull(IPv6), "IPv6", 1==1, "")
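A quick way to sanity-check it; the makeresults rows below are just a stand-in for the three sample hosts in the question:

| makeresults count=3
| streamstats count as n
| eval HOSTNAME=case(n==1, "SampleA", n==2, "SampleB", n==3, "SampleC")
| eval IPv4=case(n==1, "0.0.0.1", n==3, "0.0.0.3"), IPv6=case(n==2, "0.0.0.2", n==3, "A:B:C:D:E:F")
| eval Status = case(isnotnull(IPv4) AND isnotnull(IPv6), "IPv4 + IPv6", isnotnull(IPv4), "IPv4", isnotnull(IPv6), "IPv6", 1==1, "")
| table HOSTNAME IPv4 IPv6 Status

Since case() returns null when no condition matches, SampleB's IPv4 stays null and the Status logic falls through to "IPv6" as intended.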
Hi, I have the scenario below. My brain is very slow at this time of the day! I need an eval to create a Status field, as in the table below, that will flag a host according to whether it is running on IPv4, IPv6, or both IPv4 + IPv6.

HOSTNAME  IPv4     IPv6         Status
SampleA   0.0.0.1               IPv4
SampleB            0.0.0.2      IPv6
SampleC   0.0.0.3  A:B:C:D:E:F  IPv4 + IPv6

Thanks in advance!!!
Query1:

index=app-index source=application.logs "Initial message received with below details"
| rex field=_raw "RampData :\s(?<RampdataSet>\w+)"
| stats count as IntialMessage by RampdataSet

Output:

RampdataSet  IntialMessage
WAC          10
WAX          30
WAM          22
STC          33
STX          66
OTP          20

Query2:

index=app-index source=application.logs "Initial message Successfull"
| rex field=_raw "RampData :\s(?<RampdataSet>\w+)"
| stats count as SuccessfullMessage by RampdataSet

Output:

RampdataSet  SuccessfullMessage
WAC          0
WAX          15
WAM          20
STC          12
STX          30
OTP          10
TTC          5
TAN          7
TXN          10
WOU          12

Query3:

index=app-index source=application.logs "Initial message Error"
| rex field=_raw "RampData :\s(?<RampdataSet>\w+)"
| stats count as ErrorMessage by RampdataSet

Output:

RampdataSet  ErrorMessage
WAC          0
WAX          15
WAM          20
STC          12

We want to combine the three queries and get the output shown below. How can we do that?

RampdataSet  IntialMessage  SuccessfullMessage  ErrorMessage  Total
WAC          10             0                   0             10
WAX          30             15                  15            60
WAM          22             20                  20            62
STC          33             12                  12            57
STX          66             30                  0             96
OTP          20             10                  0             30
TTC          0              5                   0             5
TAN          0              7                   0             7
TXN          0              10                  0             10
WOU          0              12                  0             12
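One hedged way to combine them: since all three searches share the same index, source, and rex, you can classify each event in a single pass and pivot with chart. A sketch — the helper field msgType is made up here, and the three search strings are copied from the queries above:

index=app-index source=application.logs ("Initial message received with below details" OR "Initial message Successfull" OR "Initial message Error")
| rex field=_raw "RampData :\s(?<RampdataSet>\w+)"
| eval msgType = case(searchmatch("Initial message received with below details"), "IntialMessage", searchmatch("Initial message Successfull"), "SuccessfullMessage", searchmatch("Initial message Error"), "ErrorMessage")
| chart count over RampdataSet by msgType
| fillnull value=0 IntialMessage SuccessfullMessage ErrorMessage
| addtotals fieldname=Total IntialMessage SuccessfullMessage ErrorMessage

An alternative is to append the three stats searches and re-aggregate with | stats sum(*) as * by RampdataSet, but the single-pass version avoids running two subsearches.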
We have around 10 services. Using the query below I am getting 8 services, and the other 2 are not displayed in the table, but we can view them in events. Field extraction is working correctly, so I am not sure why the other 2 services are not showing up in the table. Please find the output.

index=test-index (data loaded) OR ("GET data published/data/ui" OR "GET /v8/wi/data/*" OR "GET data/ui/wi/load/success")
| rex field=_raw "DIP:\s+\[(?<dip>[^\]]+)."
| rex field=_raw "ACTION:\s+(?<actions>\w+)"
| rex field=_raw "SERVICE:\s+(?<services>\S+)"
| search actions=start OR actions=done NOT service="null"
| eval split=services.":".actions
| timechart span=1d count by split
| eval _time=strftime(_time, "%d/%m/%Y")
| table _time *start *done

Current output (the DCC:DONE and PIP:DONE fields are missing):

_time     AAP:START  ACC:START  ABB:START  DCC:START  PIP:START  AAP:DONE  ACC:DONE  ABB:DONE
1/2/2022  1          100        1          100        1          1         66        1
2/2/2022  5          0          5          0          3          3         0         3
3/2/2022  10         0          10         0          8          7         0         8
4/2/2022  100        1          100        1          97         80        1         80
5/2/2022  0          5          0          5          350        0         4         0

Expected output:

_time     AAP:START  ACC:START  ABB:START  DCC:START  PIP:START  AAP:DONE  ACC:DONE  ABB:DONE  DCC:DONE  PIP:DONE
1/2/2022  1          100        1          100        1          1         66        1         99        1
2/2/2022  5          0          5          0          3          3         0         3         0         2
3/2/2022  10         0          10         0          8          7         0         8         0         3
4/2/2022  100        1          100        1          97         80        1         80        1         90
5/2/2022  0          5          0          5          350        0         4         0         5         200
Hi Tony, based on the first screen capture, the javaagent node is not reporting to the controller (it's showing 0% for status), and this is the reason it's not showing up in the dashboard. We will need to take a look at the logs to understand why the agent is unable to establish a connection with the controller. Could you capture the logs for the node that is not reporting and attach them here? (Docs on capturing logs: https://docs.appdynamics.com/appd/24.x/24.3/en/application-monitoring/install-app-server-agents/java-agent/administer-the-java-agent/java-agent-logging) After initial analysis we will let you know if we need to collect any additional information. Thanks
Hi all! I've got an issue with macro expansion taking an excessively long time when you use the keyboard shortcut Ctrl+Shift+E. I'm looking for someone to try the same thing on their own system and let me know if you're seeing this too. That will help me determine whether this is a problem in my environment or a possible bug in the software. To test, find any macro in your environment.

Establish a baseline: enter just the macro name in the search box and press Ctrl+Shift+E (or, I think, Command+Shift+E on Mac). Note how long it takes for the modal pop-up to show you the expanded macro. It is not necessary to run the search.

`mymacro`

Test the issue: using the same macro as above, create a simple search that has the macro inside a subsearch. Try expanding the macro. Are you getting a slow response? For me, it takes more than 20 seconds to expand the macro.

| makeresults
| append [`mymacro`]

I appreciate the help from anyone willing to test.
Hi @Dean.Marchetti, No worries, just let us know how it goes when you get around to it.
Are you sure that your raw event is not a valid JSON closer to

{"date": "1/2/2022 00:12:22,124", "DATA": "[http:nio-12567-exec-44] DIP: [675478-7655a-56778d-655de45565] Data: [7665-56767ed-5454656] MIM: [483748348-632637f-38648266257d] FLOW: [NEW] { SERVICE: AAP | Applicationid: iis-675456 | ACTION: START | REQ: GET data published/data/ui } DADTA -:TIME:<TIMESTAMP> (0) 1712721546785 to 1712721546885 ms GET /v8/wi/data/*, GET data/ui/wi/load/success", "tags": {"host": "GTU5656", "insuranceid": "8786578896667", "lib": "app"}}

instead? In other words, do you not already have a field named "DATA"? Because the overall structure of your illustration is very much compliant. Assuming you have a field named DATA, a better strategy is to reconstruct the structure your developers intended rather than extract individual tidbits as random text, because your developers have clearly put thought into the data structure within DATA. I would propose something like

index=test-index (data loaded) OR ("GET data published/data/ui" OR "GET /v8/wi/data/*" OR "GET data/ui/wi/load/success")
| rex field=DATA mode=sed "s/ *[\|}\]]/\"/g s/: *\[*/=\"/g"
| rename _raw as temp
| rename DATA AS _raw
| kv
| rename temp as _raw

Your sample data should give you

ACTION = START
Applicationid = iis-675456
DIP = 675478-7655a-56778d-655de45565
Data = 7665-56767ed-5454656
FLOW = NEW
MIM = 483748348-632637f-38648266257d
REQ = GET data published/data/ui
SERVICE = AAP
date = 1/2/2022 00:12:22,124
http = nio-12567-exec-44
tags.host = GTU5656
tags.insuranceid = 8786578896667
tags.lib = app

Here is an emulation that results in my hypothesized raw log:

| makeresults
| eval _raw = "{\"date\": \"1/2/2022 00:12:22,124\", \"DATA\": \"[http:nio-12567-exec-44] DIP: [675478-7655a-56778d-655de45565] Data: [7665-56767ed-5454656] MIM: [483748348-632637f-38648266257d] FLOW: [NEW] { SERVICE: AAP | Applicationid: iis-675456 | ACTION: START | REQ: GET data published/data/ui } DADTA -:TIME:<TIMESTAMP> (0) 1712721546785 to 1712721546885 ms GET /v8/wi/data/*, GET data/ui/wi/load/success\", \"tags\": {\"host\": \"GTU5656\", \"insuranceid\": \"8786578896667\", \"lib\": \"app\"}}"
| spath
``` the above emulates index=test-index (data loaded) OR ("GET data published/data/ui" OR "GET /v8/wi/data/*" OR "GET data/ui/wi/load/success") ```

Play with the emulation and compare with real data.

Note: In the unimaginable case where your developers try really hard to mess with everybody's mind and inject a semblance of JSON compliance while violating common sense, you can still apply the same principle against _raw, like this:

index=test-index (data loaded) OR ("GET data published/data/ui" OR "GET /v8/wi/data/*" OR "GET data/ui/wi/load/success")
| rex mode=sed "s/ *[\|}\]]/\"/g s/: *\[*/=\"/g"
| kv

This is what the output would look like:

ACTION = START
Applicationid = iis-675456
DATA = http=
DIP = 675478-7655a-56778d-655de45565
Data = 7665-56767ed-5454656
FLOW = NEW
MIM = 483748348-632637f-38648266257d
REQ = GET data published/data/ui
SERVICE = AAP
host = GTU5656

Without a better structure, you won't get the subnodes embedded in tags; but your original question does not seem to care about tags.
Here is an emulation that resembles the actual sample you posted:   | makeresults | eval _raw = "{\"date\": \"1/2/2022 00:12:22,124\", DATA: [http:nio-12567-exec-44] DIP: [675478-7655a-56778d-655de45565] Data: [7665-56767ed-5454656] MIM: [483748348-632637f-38648266257d] FLOW: [NEW] { SERVICE: AAP | Applicationid: iis-675456 | ACTION: START | REQ: GET data published/data/ui } DADTA -:TIME:<TIMESTAMP> (0) 1712721546785 to 1712721546885 ms GET /v8/wi/data/*, GET data/ui/wi/load/success\", \"tags\": {\"host\": \"GTU5656\", \"insuranceid\": \"8786578896667\", \"lib\": \"app\"}}" ``` the above emulates index=test-index (data loaded) OR ("GET data published/data/ui" OR "GET /v8/wi/data/*" OR "GET data/ui/wi/load/success") ```  
I'm setting up a lab instance of Splunk Enterprise in preparation to replace our legacy instance in a live environment, and I'm getting this error message:

"homePath='/mnt/splunk_hot/abc/db' of index=abc on unusable filesystem"

I'm running RHEL 8 VMs with Splunk 9.1: 2 indexers clustered together and a cluster manager. I've attached external drives for hot and cold storage to each indexer. The external drives have been formatted as ext4, set to mount at every boot as /mnt/splunk_hot and /mnt/splunk_cold, and pointed to by volume in indexes.conf. They come up at boot, and I can navigate to them and write to them. They're currently owned by root; I couldn't find who should have permission over them, so I left them as-is to start. I tried to enable OPTIMISTIC_ABOUT_FILE_LOCKING=1, but that didn't do anything. That being said, I suspect I've missed a step in mounting the external drives. I wasn't able to find specifics about the way I'm doing this, so I pose the question: am I doing something wrong, or missing a step, in mounting these external drives? Is this now a bad practice? I'm stumped.

My indexes.conf:

[volume:hot]
path = /mnt/splunk_hot

[volume:cold]
path = /mnt/splunk_cold

[abc]
repFactor = auto
homePath = volume:hot/abc/db
coldPath = volume:cold/abc/db
thawedPath = $SPLUNK_DB/abc/thaweddb
## We're not utilizing frozen storage at all so I left it default

Any advice here would be greatly appreciated!
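For reference, a minimal sketch of the two steps that most often trip this up, assuming Splunk runs as the user and group splunk and the partitions are /dev/sdb1 and /dev/sdc1 (both names hypothetical here):

# /etc/fstab - mount the ext4 volumes at every boot
/dev/sdb1  /mnt/splunk_hot   ext4  defaults  0 2
/dev/sdc1  /mnt/splunk_cold  ext4  defaults  0 2

# hand ownership of the mount points to the Splunk user
chown -R splunk:splunk /mnt/splunk_hot /mnt/splunk_cold

Root-owned mount points that the splunk user cannot write to are one plausible trigger for the "unusable filesystem" message, though the actual cause may differ.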
Hi All, I'm sorry for not replying sooner. I have been out of the office and did not have a chance to reply. The reply from Terence is not what we were looking for, but it may be an answer to the issue. We plan to test the answer from Terence over the next week or so. Please stay tuned.
Hi. Were you able to overcome this issue?  
Were you able to find an answer to your question? I know there are lots of SAP customers who face the same problem. Take a look at PowerConnect. It works out of the box and pulls in SAP CPI logs and many other SAP SaaS offerings. It also helps cut down on mean time to detect/resolve other performance and security issues.
Sample logs:

{"date": "1/2/2022 00:12:22,124" DATA [http:nio-12567-exec-44] DIP: [675478-7655a-56778d-655de45565] Data: [7665-56767ed-5454656] MIM: [483748348-632637f-38648266257d] FLOW: [NEW] { SERVICE: AAP | Applicationid: iis-675456 | ACTION: START | REQ: GET data published/data/ui } DADTA -:TIME:<TIMESTAMP> (0) 1712721546785 to 1712721546885 ms GET /v8/wi/data/*, GET data/ui/wi/load/success, "tags": {"host": "GTU5656", "insuranceid": "8786578896667", "lib": "app"}}

{"date": "1/2/2022 00:12:22,124" DATA [http:nio-12567-exec-44] DIP: [675478-7655a-56778d-655de45565] Data: [7665-56767ed-5454656] MIM: [483748348-632637f-38648266257d] FLOW: [NEW] { SERVICE: AAP | Applicationid: iis-675456 | ACTION: DONE | REQ: GET data published/data/ui } DADTA -:TIME:<TIMESTAMP> (0) 1712721546785 to 1712721546885 ms GET /v8/wi/data/*, GET data/ui/wi/load/success, "tags": {"host": "GTU5656", "insuranceid": "8786578896667", "lib": "app"}}

Hi @PickleRick, I added the sample logs; let me know if you need any other details.
We don't know your data, we don't know what you're getting, and we don't know whether you match your data properly or extract the fields properly. We don't know anything except a search and an Excel table.
It's a bit more complicated than that.

1. As @richgalloway pointed out, the UF is by default capped with maxKBps (which is a rough value - there is no guarantee that Splunk will _always_ process no more than that value per second).

2. Even if you set the limit to 0 (no limit at all), back pressure from the output will make the forwarder stop reading the file until the queue empties a bit.

Generally, the "speed" at which Splunk reads files depends mostly on non-Splunk limits (like the output rate, which might be limited by receiving-instance performance or network bandwidth, or the input rate if the file is placed on a network share). Also, since the limits apply to the overall size of the data regardless of how big the events are, the EPS value isn't that important here - the same limit applies whether you send a few big events or many small ones.

But there is one more thing worth pointing out - the UF doesn't (typically, unless you use indexed extractions on structured data) deal with events as such - it reads and sends to an output chunks of data to be broken into events "further down the road" (on indexers or heavy forwarders). With a sufficiently modern UF and a configured EVENT_BREAKER, you should be sending chunks of data ending on an event boundary, but you typically don't send single events (unless they are huge).
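For reference, both settings mentioned above live on the UF; a minimal sketch, with [my_sourcetype] standing in as a hypothetical sourcetype:

# limits.conf on the UF - lift the default throughput cap (0 = unlimited)
[thruput]
maxKBps = 0

# props.conf on the UF - let the forwarder cut chunks on event boundaries
[my_sourcetype]
EVENT_BREAKER_ENABLE = true
EVENT_BREAKER = ([\r\n]+)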
index=test-index (data loaded) OR ("GET data published/data/ui" OR "GET /v8/wi/data/*" OR "GET data/ui/wi/load/success")
| rex field=_raw "DIP:\s+\[(?<dip>[^\]]+)."
| rex field=_raw "ACTION:\s+(?<actions>\w+)"
| rex field=_raw "SERVICE:\s+(?<services>\S+)"
| search actions=start OR actions=done NOT service="null"
| eval split=services.":".actions
| timechart span=1d count by split
| eval _time=strftime(_time, "%d/%m/%Y")
| table _time *start *done

When we run the above query, not all services get captured, even though we have data; I attached the screenshot (the highlighted ones are missing). Can anyone let me know what the issue with the query is?
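A hedged diagnostic, not a confirmed cause: timechart applies a series limit (10 by default) and folds or drops the remainder, so once the number of service:action combinations exceeds the limit, some columns can disappear. Lifting the limit is a one-line test:

| timechart span=1d limit=0 useother=f count by split

If DCC:DONE and PIP:DONE still never appear, the events are likely being filtered out before the timechart, which would point back at the extractions or the search clause instead.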
I am reading the host from a log file and have a query to return all the hosts:

index=aaa source="/var/log/test1.log" | stats count by host

Can we include a step to categorize hosts as test/qa/prod in the dropdown list, from the list of hosts returned by the query itself (using wildcards: if the host has a "t" it's a test server, if it has a "q" it's a QA server, etc.)? For now I am using static options:

test - testhost
qa - qahost
prod - prodhost
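A minimal sketch of the dynamic version, assuming the environment really is encoded as the leading letter of the hostname (the field name env is made up here):

index=aaa source="/var/log/test1.log"
| stats count by host
| eval env=case(match(host, "^t"), "test", match(host, "^q"), "qa", 1==1, "prod")

In a dashboard, pointing the dropdown input's populating search at this and setting fieldForLabel/fieldForValue to env would then replace the static options.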
It depends on the size of an event.  The UF is rate-limited by the maxKBps setting in limits.conf.
Hi Team, the Docker build is failing with this error:

=> ERROR [15/16] RUN sed -i 's/<AppenderRef ref=\"Console\"\/>/<!-- <AppenderRef ref=\"Console\"\/> -->/g' /usr/local/lib/python3.10/site-packages/appdynamics_bindeps/proxy/conf/logging/log4j2.xml

Below is the package version:

appdynamics==23.8.0.6197

appdynamics-bindeps is not getting pulled/installed. I tried with the latest version of the appdynamics package (appdynamics==24.2.0.6567) with the same result. This is happening only on Mac M1 Pro. Adding "appdynamics-bindeps-linux-x64==23.8.0" explicitly to requirements.txt gives the error below.

File "/tmp/appd/lib/cp310-cp310-ffd7b4d13d09a0572eb0f3d85bb006d0043821e28e0e1e2c12f81995da1bd796/site-packages/appdynamics_bindeps/zmq/backend/cython/__init__.py", line 6, in <module>
2024-04-10 11:14:44 from . import (constants, error, message, context,
2024-04-10 11:14:44 ImportError: cannot import name 'constants' from partially initialized module 'appdynamics_bindeps.zmq.backend.cython' (most likely due to a circular import) (/tmp/appd/lib/cp310-cp310-ffd7b4d13d09a0572eb0f3d85bb006d0043821e28e0e1e2c12f81995da1bd796/site-packages/appdynamics_bindeps/zmq/backend/cython/__init__.py)
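Since the failure shows up only on Apple Silicon, one thing worth trying - a sketch, not a confirmed fix - is building the image for linux/amd64 so that pip resolves the x64 bindeps wheel the appdynamics package expects; the image name myapp is a placeholder:

# build for x64 even on an M1/M2 host (runs under emulation)
docker build --platform=linux/amd64 -t myapp .

The same pin can live in the Dockerfile itself, e.g. FROM --platform=linux/amd64 python:3.10.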
One way around this is to use a small (﹠) or fullwidth (&) ampersand.