All Posts



Hi @goncalo, surely the issue is that the default home page is fixed and it isn't possible to define a home page based on the user or the role; the only way is to create another home page common to all the roles. In this way DASHBOARD1 will not be visible to users who aren't enabled for it, and you won't get the error page. In all my apps, I always insert a general Home Page to use as a menu and an introduction to the app. Ciao. Giuseppe
Hi @hazem, what's your issue? Connection passwords are encrypted by Splunk and it isn't possible to decrypt them. If you lose the encrypted passwords, they are the ones defined in the database, so you can reset them on the DB and change them in DB Connect. So what's your problem? Ciao. Giuseppe
Hi, Thank you for the response. I am very sure that we fulfil these requirements. No ingestion takes place, because there are no Splunk processes running. So to be clear, it is not Splunk that hangs, but the systemctl command to stop Splunkd.service. The Splunk processes have been stopped, but the systemctl command does not come back to the prompt. I can see in splunkd.log that Splunk has stopped. "ps -ef splunk": no splunk processes. Regards, Harry
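If it helps narrow it down, you could watch what systemd thinks the unit is doing while the stop command hangs; these are plain systemd commands (nothing Splunk-specific assumed), using the Splunkd.service unit name from your post:

# Show the unit's current state and the last log lines systemd captured for it
systemctl status Splunkd.service
# Recent journal entries for the unit
journalctl -u Splunkd.service --since "-10min"
# Just the activation state, useful to see if it is stuck in 'deactivating'
systemctl show Splunkd.service -p ActiveState,SubState

If the unit sits in a deactivating state even though the processes are gone, the hang is on the systemd side (for example an ExecStop step or a generous TimeoutStopSec) rather than in Splunk itself.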
It's probably my own paranoia but I try not to overwrite a data field like this in case I have to use the original data field for whatever reason. But functionally this would do what I need, I just didn't know if there was a more Splunk-y way to do it.
You can use rangemap simply

| makeresults count=100
| eval severity=random() % 5 + 1
| rangemap field=severity low=1-3 medium=4-4 high=5-5
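rangemap writes its result into a new field called range by default, so the original severity field stays untouched; a minimal follow-on to chart by it (the chart call is an assumption about what you want to plot) would be:

| makeresults count=100
| eval severity=random() % 5 + 1
| rangemap field=severity low=1-3 medium=4-4 high=5-5
| chart count by range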
What's wrong with setting the value in the same field?  Given this mock data

Severity
1
1
5
4
4
3
3
1
1
2
3
2
2

and this added to your search,

| eval Severity = if(Severity < 4, "lump", Severity)

you will get

Severity
lump
lump
5
4
4
lump
lump
lump
lump
lump
lump
lump
lump

Is this what you are looking for? (By the way, to pose an answerable question, it is always good to post sample/mock data, the desired output, and an explanation of the logic between the illustrated data and the desired output.) Play with this emulation and compare with your real data

| makeresults format=csv data="Severity
1
1
5
4
4
3
3
1
1
2
3
2
2"
``` data emulation above ```
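Since the end goal is a chart, a short usage example on top of that eval (the chart call itself is an assumption, not from the original post) could be:

| eval Severity = if(Severity < 4, "lump", Severity)
| chart count by Severity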
Hello, Does anyone have experience configuring Splunk DB Connect with an Informix database? Do we need to install the drivers explicitly for this to be configured? If yes, does anyone have the link where I can download these drivers? I am using a Linux environment. Thanks in advance.
Ok thanks for the answer. That really cleared it up.
Hi, Which API token and URL did you use? I tried two different ones and did not have success. I'm using Splunk Cloud with the App for SentinelOne (not the TA or IA), is that OK? Regards
I have a field in my data named severity that can be one of five values: 1, 2, 3, 4, and 5. I want to chart on the following: 1-3, 4, and 5.  Anything with a severity value of 3 or lower can be lumped together, but severity 4 and 5 need to be charted separately. The coalesce command is close but in my case the key is the same, it's the value that changes.  None of the mv commands look like they do quite what I need, nor does nomv.   The workaround I've considered doing is an eval command with an if statement to say if the severity is 1, 2, or 3, set a new field value to 3, then chart off of this new field.  It feels janky, but I think it would give me what I want. Is it possible to do what I want in a more elegant manner?
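For reference, a minimal sketch of the workaround described above (the severity_group field name and the chart call are assumptions for illustration, not from the original post):

| eval severity_group=if(severity <= 3, "1-3", severity)
| chart count by severity_group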
@jprior Technically, the parameter that controls macro depth is documented as

max_macro_depth = <integer>
* Maximum recursion depth for macros. Specifies the maximum levels for macro expansion.
* It is considered a search exception if macro expansion does not stop after this many levels.
* Value must be greater than or equal to 1.
* Default: 100

The word 'recursion' is used in the description of the 'max_macro_depth' parameter and also in the error you get when you try to use macros recursively as in your example, so whilst one could get into a debate about the use of the words 'recursion' and 'recursive', it's really just about depth: macro A expands macro B, which expands C, and so on. We use the term nested macros, rather than recursive macros, which as you've discovered are not possible. Once you know that macros are expanded before the search runs and cannot be affected by the data in the events, recursion is in practice impossible. We regularly use nested macros to a number of levels in some of our frameworks, as macros lend themselves to creating structure. For example, you can define `my_macro(type_a)` where 'type_a' is a fixed value and the definition takes type as an argument, which then expands to `nested_macro_$type$`, so you can use fixed values in macro calls to reference somewhat dynamic macro trees (see the sketch below). Reference to limits.conf here https://docs.splunk.com/Documentation/Splunk/9.2.1/Admin/Limitsconf#Parsing
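A minimal macros.conf sketch of that nesting pattern, reusing the macro names from the example above; the search fragments inside the nested macros are made up purely for illustration:

# macros.conf -- nested (not recursive) macros
# `my_macro(type_a)` expands to `nested_macro_type_a`, which expands to a search fragment
[my_macro(1)]
args = type
definition = `nested_macro_$type$`

[nested_macro_type_a]
definition = index=main sourcetype=type_a

[nested_macro_type_b]
definition = index=main sourcetype=type_b

In a search, `my_macro(type_a)` then resolves one level at a time to the type_a fragment, staying comfortably inside the max_macro_depth limit.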
Thanks for closing the loop
The Splunk way of doing this sort of task is to use stats, so you search both data sets, combine the bits you want based on the common field, and then do conditional logic on the results, e.g.

index=sample cf_org_name=orgname service=xyz sourceSystem!="aaa" (errorCd="*-701" status=FAILED) OR status=SUCCESS
| stats min(eval(if(status="FAILED", _time, null()))) as _time values(status) as status count by accountNumber jobNumber letterId errorCd
| where status="FAILED" AND mvcount(status)=1

which searches both failed and success events, and then combines them with stats, retaining _time if and only if the event is FAILED and splitting by the 4 fields. Without knowing your data, I don't know if letterId and errorCd have a 1:1 correlation with jobNumber, so you'll have to work out whether that will work for you. The final where condition will only keep results that have ONLY recorded a FAILED status. Subsearches have their uses, but generally using NOT clauses is inefficient and a single search (no subsearches) is often a better approach.
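If letterId and errorCd turn out not to correlate 1:1 with jobNumber, one variation (a sketch only, reusing the fields above) is to take them out of the by clause and collect them with values() instead, so the grouping is driven purely by accountNumber and jobNumber:

index=sample cf_org_name=orgname service=xyz sourceSystem!="aaa" (errorCd="*-701" status=FAILED) OR status=SUCCESS
| stats min(eval(if(status="FAILED", _time, null()))) as _time values(status) as status values(letterId) as letterId values(errorCd) as errorCd by accountNumber jobNumber
| where status="FAILED" AND mvcount(status)=1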
$env:user$ is only available when a search executes, so try something like this

<search>
  <query>
    | makeresults
    | eval user=$env:user|s$
  </query>
  <done>
    <eval token="userid">$result.user$</eval>
  </done>
</search>
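Once the done handler has set it, $userid$ behaves like any other dashboard token; for example (an illustrative panel only, the search inside it is an assumption, not from your dashboard):

<panel>
  <title>Activity for $userid$</title>
  <table>
    <search>
      <query>index=_audit user=$userid$ | stats count by action</query>
      <earliest>-24h</earliest>
      <latest>now</latest>
    </search>
  </table>
</panel>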
Try something like this

| eventstats latest(status) as latest_status by jobNumber
| where latest_status="FAILED"
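Appended to the base search from your post (same field names; a sketch, not tested against your data), that would look something like:

index=sample cf_org_name=orgname service=xyz sourceSystem!="aaa" (errorCd="*-701" status=FAILED) OR status=SUCCESS
| eventstats latest(status) as latest_status by jobNumber
| where latest_status="FAILED" AND status="FAILED"
| table _time accountNumber jobNumber letterId errorCd
| sort _time

The extra status="FAILED" in the where keeps only the failed events themselves, from jobs whose most recent status is still FAILED.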
Hi everybody, I'm trying to monitor a demo web app deployed with Kubernetes, but even following the documentation I came up short. The web app consists of 4 containers, all running properly on Ubuntu Server 24.04, using MicroK8s and kubectl. I followed this guide in the documentation: https://docs.appdynamics.com/appd/24.x/latest/en/infrastructure-visibility/monitor-kubernetes-with-the-cluster-agent/install-the-cluster-agent/install-the-cluster-agent-with-the-kubernetes-cli Everything was clear for the seven steps, but I have to point out that in step 6 the YML example shows port 8080 for the controllerUrl, and since I have the SaaS version, I changed it to 443. When I validated the installation I noticed that the cluster agent was not registered, so I started following the troubleshooting docs. When I retrieved the logs for the namespace with the operator and the cluster-agent I found the following error:

[ERROR]: 2024-07-03 15:52:57 - secretconfig.go:68 - Problem With Getting /opt/appdynamics/cluster-agent/secret-volume/api-user Secret: open /opt/appdynamics/cluster-agent/secret-volume/api-user: no such file or directory

I don't know why it is searching for that specific path, at which moment I should have created that api-user, or where to find it; that was not in the docs. So I'll be really thankful if someone could help me with this issue. Hope everybody has a nice day. Regards
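For what it's worth, that path is where the cluster agent mounts its credentials secret into the pod; one thing worth checking (a hedged guess based only on the error path above, so the secret and key names here are assumptions to verify against your agent spec and the install docs) is whether a secret with an api-user key exists in the agent's namespace, created along these lines:

kubectl -n appdynamics create secret generic cluster-agent-secret \
  --from-literal=controller-key=<controller-access-key> \
  --from-literal=api-user="<username>@<account>:<password>"

If the secret referenced by your cluster agent spec lacks the api-user key, the agent cannot read /opt/appdynamics/cluster-agent/secret-volume/api-user and logs an error like the one above.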
I'm looking to get all failed event logs based on a field, and then trying to find the success event log for the same field, so that I have a net failed event log to report. Basically, I'm trying to build an alert that I can run every X hours, looking back X hours, that finds ONLY the failed logs that haven't succeeded yet. The failures are retried at regular intervals. In short, get all failed events and weed out the ones that succeeded on a later retry, so I only have the ones that are still failed. Trying something like below, but I think there should be a better way than this..

index=sample cf_org_name=orgname service=xyz sourceSystem!="aaa" errorCd="*-701" status=FAILED
| where jobNumber NOT IN
    [search index=sample cf_org_name=orgname service=xyz sourceSystem!="aaa" status=SUCCESS ]
| table _time accountNumber jobNumber letterId errorCd
| sort _time

TIA!!
Hello all, is there any method to decrypt the identity password in the Splunk DB Connect app? We are using Splunk DB Connect version 3.11.1.
This literally saved my day! Thanks for the summary!!
Any chance that there will be a Splunk integration for TRAP?