All Posts

Hi, which API token and URL did you use? I tried two different ones and did not have success. I'm using Splunk Cloud with the App for SentinelOne (not the TA or IA); is that OK? Regards
I have a field in my data named severity that can be one of five values: 1, 2, 3, 4, and 5. I want to chart on the following: 1-3, 4, and 5. Anything with a severity value of 3 or lower can be lumped together, but severity 4 and 5 need to be charted separately. The coalesce command is close, but in my case the key is the same; it's the value that changes. None of the mv commands look like they do quite what I need, nor does nomv. The workaround I've considered is an eval command with an if statement: if the severity is 1, 2, or 3, set a new field value to 3, then chart off of this new field. It feels janky, but I think it would give me what I want. Is it possible to do what I want in a more elegant manner?
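A minimal sketch of the eval/chart workaround described above, assuming the field is named severity and the chart simply counts events; the group label "1-3" and the field name severity_group are illustrative:

... your base search ...
| eval severity_group=if(severity<=3, "1-3", tostring(severity))
| chart count by severity_group

This lumps severities 1-3 into one bucket while leaving 4 and 5 as their own columns in the chart.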
@jprior Technically the parameter to control macro depth is documented as:

max_macro_depth = <integer>
* Maximum recursion depth for macros. Specifies the maximum levels for macro expansion.
* It is considered a search exception if macro expansion does not stop after this many levels.
* Value must be greater than or equal to 1.
* Default: 100

The word 'recursion' is used in the description of the 'max_macro_depth' parameter and also in the error you get when you try to use macros recursively as in your example. So whilst one could get into a debate about the use of the words 'recursion' and 'recursive', it's really just about depth: macro A expands macro B, which expands C, and so on. We use the term nested macros, rather than recursive macros, which, as you've discovered, is not possible. Once you know that macros are expanded before the search runs and cannot be affected by the data in the events, recursion is in practice impossible.

We regularly use nested macros to a number of levels in some of our frameworks, as macros lend themselves to creating structure. For example, you can define `my_macro(type_a)`, where 'type_a' is a fixed value and the definition takes type as an argument, which then expands to `nested_macro_$type$`, so you can use fixed values in macro calls to reference somewhat dynamic macro trees.

Reference to limits.conf here: https://docs.splunk.com/Documentation/Splunk/9.2.1/Admin/Limitsconf#Parsing
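A minimal macros.conf sketch of that nesting pattern, assuming a one-argument macro named my_macro and a nested macro named nested_macro_type_a; the stanza names and the search text in the nested definition are illustrative:

[my_macro(1)]
args = type
definition = `nested_macro_$type$`

[nested_macro_type_a]
definition = index=main sourcetype=type_a

Calling `my_macro(type_a)` in a search would then expand to `nested_macro_type_a`, which in turn expands to its own definition.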
Thanks for closing the loop
The Splunk way of doing this sort of task is to use stats: search both data sets, combine the bits you want based on the common field, and then apply conditional logic on the results, e.g.

index=sample cf_org_name=orgname service=xyz sourceSystem!="aaa" (errorCd="*-701" status=FAILED) OR status=SUCCESS
| stats min(eval(if(status="FAILED", _time, null()))) as _time values(status) as status count by accountNumber jobNumber letterId errorCd
| where status="FAILED" AND mvcount(status)=1

This searches both failed and success events and then combines them with stats, retaining _time only if the event is FAILED, split by the 4 fields. Without knowing your data, I don't know if letterId and errorCd have a 1:1 correlation with jobNumber, so you'll have to work out whether that will work for you. The final where condition then only keeps events that have ONLY recorded a FAILED status. Subsearches have their uses, but using NOT clauses is generally inefficient, and a single search (no subsearches) is often a better approach.
$env:user$ is only available when a search executes, so try something like this:

<search>
  <query>| makeresults | eval user=$env:user|s$</query>
  <done>
    <eval token="userid">$result.user$</eval>
  </done>
</search>
Try something like this:

| eventstats latest(status) as latest_status by jobNumber
| where latest_status="FAILED"
Hi everybody, I'm trying to monitor a demo web app deployed with Kubernetes, but even following the documentation I come up short. The web app consists of 4 containers, all running properly on Ubuntu Server 24.04, using MicroK8s and kubectl. I followed this guide in the documentation: https://docs.appdynamics.com/appd/24.x/latest/en/infrastructure-visibility/monitor-kubernetes-with-the-cluster-agent/install-the-cluster-agent/install-the-cluster-agent-with-the-kubernetes-cli

Everything was clear for the seven steps, but I have to point out that in step 6 the YML example shows port 8080 for the controllerUrl, and since I have the SaaS version, I changed it to 443. When I validated the installation I noticed that the cluster agent was not registered, so I started following the troubleshooting docs. When I retrieved the logs for the namespace with the operator and the cluster-agent I found the following error:

[ERROR]: 2024-07-03 15:52:57 - secretconfig.go:68 - Problem With Getting /opt/appdynamics/cluster-agent/secret-volume/api-user Secret: open /opt/appdynamics/cluster-agent/secret-volume/api-user: no such file or directory

I don't know why it is searching for that specific path, at which moment I should create it, or where to find that api-user; it was not in the docs. I'd be really thankful if someone could help me with this issue. Hope everybody has a nice day. Regards
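To illustrate what the agent appears to be looking for, here is a sketch of creating a Kubernetes secret that carries an api-user entry alongside the controller access key. The secret name cluster-agent-secret, the appdynamics namespace, and the user@account:password credential format are assumptions based on typical Cluster Agent setups, not taken from the guide above; replace the placeholders with your own values.

# Hypothetical example: secret name, namespace, and credential format are assumptions
kubectl -n appdynamics create secret generic cluster-agent-secret \
  --from-literal=controller-key=<controller-access-key> \
  --from-literal=api-user="<username>@<account-name>:<password>"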
I'm looking to get all failed event logs based on a field, and then find the success event logs for the same field, so that I have a net failed event log to report. Basically, I'm trying to build an alert that I can run every X hours, looking back X hours, that finds me ONLY the failed logs that haven't succeeded yet. The failures are retried at regular intervals. In short, get all failed events and weed out the ones that succeeded on later retries, so I only have the ones that are still failed. Trying something like below, but I think there should be a better way than this:

index=sample cf_org_name=orgname service=xyz sourceSystem!="aaa" errorCd="*-701" status=FAILED
| where jobNumber NOT IN [search index=sample cf_org_name=orgname service=xyz sourceSystem!="aaa" status=SUCCESS ]
| table _time accountNumber jobNumber letterId errorCd
| sort _time

TIA!!
Hello all, is there any method to decrypt the identity password in the Splunk DB Connect app? We are using Splunk DB Connect version 3.11.1.
This literally saved my day! Thanks for the summary!!
Any chance that there will be a Splunk integration for TRAP?
I believe the tokens must be defined on the MC.  You should be able to do that by copying inputs.conf from a Search Head to the MC and restarting the MC.
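For reference, a minimal sketch of what a HEC token stanza in inputs.conf might look like; the stanza name my_hec_token, the token value, and the target index are illustrative, and that these are HEC tokens at all is an assumption based on the post above:

[http://my_hec_token]
token = 11111111-2222-3333-4444-555555555555
disabled = 0
index = main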
Wow!  I've encountered the same.  Thanks for posting.
I have a panel (A) that uses a token (usr.name) for input. The token is set by another panel (B) when the user clicks on a user name. In its current state the viewer gets no results in panel A when the dashboard loads and only displays results when the viewer clicks on a username from panel B. I am attempting to initialize the dashboard with a set of default results based on the currently logged in user. If I pull up the dashboard I would like the results in panel A to default to my username. The problem I am running into is that when I set the initial value it accepts it as plain text instead of the token value.

<init>
  <set token="usr.name">$env:user$</set>
</init>
<row>
  <html>
    $usr.name$
  </html>
</row>

This code displays the $usr.name$ token as "$env:user$" instead of my username. My eventual goal is to have panel A display results for myself when I first access the dashboard and for another user if I click on their name in the results of panel B. I have been coming up empty on Google, so am reaching out here to see if anyone has any ideas.
Hi, please can you help me as well? Hi @Dallastek1, which app did you install in Splunk Cloud for the integration? Did you use a HF as well? I tried to configure more than one "API Key" and URL but just didn't succeed. Can you explain the steps you took? Regards.
Uncheck this.
So then it seems like the answer is: No, it is not possible to create a recursive macro. Then I don't understand why there is a max recursion limit. That limit seems useless since it's not actually possible to use recursion. It should just return an error the moment recursion is detected.
So I tried a couple of search strings and I am able to see my new hosts:

index=_internal source=*metrics.log* tcpin_connections
| stats count by sourceIp

and

index=_internal source=*metrics.log* tcpin_connections
| stats count by hostname

I also tried this long search string (see below) and all the hosts are showing up. But when I try index=<index-name>, I don't see them.

index="_internal" sourcetype="splunkd" source="*metrics.lo*" group=tcpin_connections component=Metrics
| eval sourceHost=if(isnull(hostname), sourceHost, hostname)
| eval connectionType=case(fwdType=="uf","universal forwarder", fwdType=="lwf", "lightweight forwarder", fwdType=="full", "heavy forwarder", connectionType=="cooked" or connectionType=="cookedSSL","Splunk forwarder", connectionType=="raw" or connectionType=="rawSSL","legacy forwarder")
| eval version=if(isnull(version),"pre 4.2",version)
| eval guid=if(isnull(guid),sourceHost,guid)
| eval os=if(isnull(os),"n/a",os)
| eval arch=if(isnull(arch),"n/a",arch)
| fields connectionType sourceIp sourceHost splunk_server version os arch kb guid ssl tcp_KBps
| eval lastReceived = case(kb>0, _time)
| eval lastConnected=max(_time)
| stats first(sourceIp) as sourceIp first(connectionType) as connectionType max(version) as version first(os) as os first(arch) as arch max(lastConnected) as lastConnected max(lastReceived) as lastReceived sparkline(avg(tcp_KBps)) as "KB/s" avg(tcp_KBps) as "Avg_KB/s" by sourceHost guid ssl
| addinfo
| eval status=if(lastConnected<(info_max_time-900),"missing",if(mystatus="quiet","quiet","active"))
| fields sourceHost sourceIp version connectionType os arch lastConnected lastReceived KB/s Avg_KB/s status ssl
| rename sourceHost as Forwarder version as "Splunk Version" connectionType as "Forwarder Type" os as "Platform" status as "Current Status" lastConnected as "Last Connected" lastReceived as "Last Data Received"
| fieldformat "Last Connected"=strftime('Last Connected', "%D %H:%M:%S %p")
| fieldformat "Last Data Received"=strftime('Last Data Received', "%D %H:%M:%S %p")
| sort Forwarder
@krutika_ag  Maybe I don't entirely understand your scenario. Is there only one syslog server, or multiple ones? The syslog server, if it is properly configured, does not just create duplicate entries. Check your syslog configuration both on the server and on the sending nodes. As far as ensuring that the ingestion is unique, add a CRC salt and/or ensure there is a stanza in your inputs.conf that ignores older files. There is a relevant discussion here: How to avoid reindexing files after setting crcSal... - Splunk Community inputs.conf - Splunk Documentation
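A minimal inputs.conf sketch of the two settings mentioned above; the monitored path, the index and sourcetype names, and the 7-day cutoff are illustrative assumptions:

[monitor:///var/log/syslog-ng/*.log]
index = syslog
sourcetype = syslog
# Add the file's full path to the CRC calculation so files with identical headers at different paths are tracked separately
crcSalt = <SOURCE>
# Skip files whose modification time is older than 7 days
ignoreOlderThan = 7d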