All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


If I have the following table with columns DATE and USAGE, is there a way to create a third column that displays the difference in USAGE between day X and the previous day?

DATE    USAGE
Feb-28  17.68 gb
Feb-29  18.53 gb
Mar-01  19.66 gb
Mar-02  21.09 gb
Mar-03  22.04 gb
Mar-04  23.21 gb
Mar-05  24.20 gb
Mar-06  24.94 gb
Mar-07  25.64 gb
Mar-08  26.80 gb
Mar-09  27.79 gb
Mar-10  29.07 gb
Mar-11  30.09 gb
Mar-12  31.01 gb
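A sketch of one common SPL approach (hedged; the field names assume the table above, and the "gb" suffix has to be stripped before doing arithmetic): `streamstats` carries the previous row's value forward so the difference can be computed.

```
| eval usage_num = tonumber(replace(USAGE, "(?i)\s*gb", ""))
| streamstats current=f window=1 last(usage_num) as prev_usage
| eval DIFF = round(usage_num - prev_usage, 2)
| table DATE USAGE DIFF
```

The `delta` command (`| delta usage_num as DIFF`) is a shorter alternative for the same row-to-row difference.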
Are you trying to reduce the number of joins in the query (a good goal) or use this query in multiple dashboard panels (or maybe both)?
Hi! Thank you for responding so soon, I appreciate it. I am trying to apply comma formatting to the totals row that I build with this:

| appendpipe [stats sum(*) as * by Number | eval UserName="Total By Number: "]

This is my row now:

Total By Number:        1905           2229         1303         1845          1409

This is the formatting I am looking for:

Total By Number:        1,905           2,229         1,303         1,845          1,409

Thank you
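One way this is often handled (a hedged sketch, not the only option): apply comma formatting to every numeric column with `foreach` after the `appendpipe`. Note that `tostring(<value>, "commas")` converts the fields to strings, which can affect later numeric sorting.

```
| appendpipe [stats sum(*) as * by Number | eval UserName="Total By Number: "]
| foreach * [eval <<FIELD>> = if(isnum('<<FIELD>>'), tostring('<<FIELD>>', "commas"), '<<FIELD>>')]
```

The `isnum` guard leaves non-numeric columns such as UserName untouched.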
I'm running stats to find out which events I want to delete. Basically, I'm finding the minimum "change_set" a particular "source" has; now I want to delete the events for those sources with the least "change_set".

Note: the events also have many attributes apart from change_set and source.

index=test | stats min(change_set) by source

(Now delete the events which have that source and change_set.)

How can I write the delete operation with this query, in the most optimal way? @gcusello @ITWhisperer @scelikok @PickleRick

Thanks
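One possible sketch (hedged, untested): feed the stats result back as a subsearch so the main search matches only the minimal change_set per source, then pipe to `delete`. Note that `delete` requires the `can_delete` capability and only marks events as irretrievable; it does not reclaim disk space.

```
index=test
    [ search index=test
      | stats min(change_set) as change_set by source
      | fields source change_set
      | format ]
| delete
```

The subsearch expands to something like `(source="A" AND change_set=1) OR (source="B" AND change_set=7)`, so only events carrying the minimal change_set for their source are matched.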
Try something like this:

index="abc" aws_appcode="123" logGroup="watch" region="us-east-1" (cwmessage.message="*Notification(REQUESTED)*") OR (cwmessage.message="*Notification(COMPLETED)*") OR (cwmessage.message="*Notification(UPDATED)*")
| stats latest(eval(if(match('cwmessage.message',".*Notification\(REQUESTED\).*"),_time,null()))) as start_time
        latest(eval(if(match('cwmessage.message',".*Notification\(COMPLETED\).*"),_time,null()))) as cdx_time
        latest(eval(if(match('cwmessage.message',".*Notification\(UPDATED\).*"),_time,null()))) as upd_time
        by cwmessage.transId
| eval cdx=cdx_time-start_time, upd=upd_time-cdx_time
| table cwmessage.transId, cdx, upd
I have a comma-separated list in a lookup table. After reading the value from the lookup table, can I do the following?

index=foo | eval NAME_LIST="task1,task2,task3" | search NAME IN (NAME_LIST)
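As written, this probably won't behave as hoped: `IN (...)` in `search` expects literal values, so NAME_LIST is treated as the literal string "NAME_LIST" rather than expanded. A hedged sketch of a field-to-field membership test using multivalue functions (the lookup name and key field here are made up for illustration):

```
index=foo
| lookup my_lookup key OUTPUT NAME_LIST
| eval name_mv = split(NAME_LIST, ",")
| where isnotnull(mvfind(name_mv, "^" . NAME . "$"))
```

Caveat: `mvfind` treats its second argument as a regex, so this assumes NAME contains no regex metacharacters.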
You should be able to do that by enclosing your field name in dollar signs ($):

$Application Server$="running"

Refer to: https://community.splunk.com/t5/Splunk-Search/Search-field-names-with-spaces-in-map-command-inner-search/m-p/241379#M71778
I have UF 9.2.0.1 and still get OpenSSL 1.0.2zi-fips; it's definitely not the same version you are pointing to here. To be sure, I checked by running "splunk cmd openssl version".
It is not clear to me what your expected output would look like. Please can you share an example?
Good afternoon everyone, I need your help with this. I have a stats sum with the wildcard *:

| appendpipe [stats sum(*) as * by Number | eval UserName="Total By Number: "]

and I need to format the summed columns (sum(*) as *) with commas. How can I do that? Thank you.
We are using the Android agent for AppDynamics and R8 to obfuscate the code. The corresponding mapping file was uploaded; AppDynamics recognizes it and "deobfuscates" the stack trace, but in the end both versions are identical and include obfuscated method names. This has happened with multiple app releases and crashes. Locally, I am perfectly able to retrace the stack trace provided by AppDynamics with the uploaded mapping file. Does anyone have an idea what the reason for this may be?

AppDynamics Gradle Plugin version: 23.6.0
Android Gradle Plugin version: 8.1.4
If that is correct, then planet Earth and all humanity are in the wrong hands.
index="abc" aws_appcode="123" logGroup="watch" region="us-east-1" (cwmessage.message="*Notification(REQUESTED)*")
| stats latest(_time) as start_time by cwmessage.transId
| join cwmessage.transId
    [search index="abc" aws_appcode="123" logGroup="watch" region="us-east-1" (cwmessage.message="*Notification(COMPLETED)*")
    | stats latest(_time) as cdx_time by cwmessage.transId]
| join cwmessage.transId
    [search index="abc" aws_appcode="123" logGroup="watch" region="us-east-1" (cwmessage.message="*Notification(UPDATED)*")
    | stats latest(_time) as upd_time by cwmessage.transId]
| eval cdx=cdx_time-start_time, upd=upd_time-cdx_time
| table cwmessage.transId, cdx, upd

In the query above I run the same index search multiple times; I want to use it as a base search and reference it in all the nested searches for the dashboard. Please help me. Thanks.
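In Simple XML dashboards, a shared base search is declared once with an `id` and referenced via `base=` in post-process searches. A minimal sketch, assuming the dashboard is Simple XML (the broadened message filter and time range are assumptions); note that a base search should either be a transforming search or explicitly list the needed fields with `| fields`:

```xml
<search id="base_notifications">
  <query>index="abc" aws_appcode="123" logGroup="watch" region="us-east-1"
         cwmessage.message="*Notification(*)*"
         | fields _time cwmessage.message cwmessage.transId</query>
  <earliest>-24h@h</earliest>
  <latest>now</latest>
</search>

<panel>
  <table>
    <search base="base_notifications">
      <query>search cwmessage.message="*Notification(REQUESTED)*"
             | stats latest(_time) as start_time by cwmessage.transId</query>
    </search>
  </table>
</panel>
```

Each panel's post-process search then runs against the cached base results instead of re-scanning the index.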
Hi everyone, I am trying to replicate a log modification that was possible with fluentd when using splunk-connect-for-kubernetes:

splunk_kubernetes_logging:
  cleanAuthtoken:
    tag: 'tail.containers.**'
    type: 'record_modifier'
    body: |
      # replace key log
      <replace>
        key log
        expression /"traffic_http_auth".*?:.*?".+?"/
        # replace string
        replace "\"traffic_http_auth\": \"auth cleared\""
      </replace>

Since support for the above charts ended, we have switched to splunk-otel-collector, and along with it to logsengine: otel, and I am now having a hard time replicating this modification. From the documentation I read, this should be done via processors (in the agent); please correct me if I am wrong. I have tried two processors but neither works. What am I missing?

logsengine: otel
agent:
  enabled: true
  config:
    processors:
      attributes/log_body_regexp:
        actions:
          - key: traffic_http_auth
            action: update
            value: "obfuscated"
      transform:
        log_statements:
          - context: log
            statements:
              - set(traffic_http_auth, "REDACTED")

This is new to me; can anyone point me to where these log modifiers can be applied?

Thanks,
Ppal
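If traffic_http_auth lives inside the raw log body (as in the fluentd record_modifier case) rather than as an extracted attribute, the attributes processor won't see it; a transform processor using OTTL's `replace_pattern` on `body` is closer in spirit. A hedged sketch only (the processor name and pipeline wiring are assumptions; verify the other processors against the chart's generated config):

```yaml
agent:
  config:
    processors:
      transform/redact_auth:
        log_statements:
          - context: log
            statements:
              - replace_pattern(body, "\"traffic_http_auth\"\\s*:\\s*\".+?\"", "\"traffic_http_auth\": \"auth cleared\"")
    service:
      pipelines:
        logs:
          processors: [memory_limiter, k8sattributes, batch, transform/redact_auth]
```

The regex mirrors the original fluentd expression; adjust it to match the exact JSON shape of your log bodies.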
We are also getting the error below:

Error occurred while connecting to eventhub: CBS Token authentication failed

We were told that Splunk wasn't hitting the Azure firewall at all. Did you solve this? If so, was it a network opening? Please share so others can fix it as well.
Here is my sample. I want to get all saved searches, then from the returned result filter the field called "search" for search strings that contain something like "| collect". So

| where (search LIKE "%| collect%")

does the job.

Full search string:

| rest /servicesNS/-/-/saved/searches
| table title, cron_schedule next_scheduled_time eai:acl.owner actions eai:acl.app action.email action.email.to dispatch.earliest_time dispatch.latest_time search
| where (search LIKE "%| collect%")

Add-on: let's say I want to filter a field called "action.summary_index" for a value equal to 1. I can do it as below, enclosing the field name in dollar signs ($):

| rest /servicesNS/-/-/saved/searches
| table title, cron_schedule next_scheduled_time eai:acl.owner actions eai:acl.app action.email action.email.to dispatch.earliest_time dispatch.latest_time search *
| where $action.summary_index$ = "1"
SentinelOne App v5.2: are there any guides or KB articles on configuring the SentinelOne App? I can't seem to find any information on this anywhere. My understanding is that a service account needs to be created with a privileged role, and from there the API key is generated. The SentinelOne app will then need the console URL and the API key. Am I missing anything?
If Agrupamento is a multi-value field, it will be counted once for each value in the multivalue field:

| makeresults
| eval field=split("AA","")
| stats count by field _time
Hi @vinihei_987,
are you sure that in some events you have only one Agrupamento? Probably there is more than one in some (or all) events, so you get a total greater than the number of events.
Ciao.
Giuseppe
It's not clear what the problem is.  Are you seeing repeated results or are the counts twice the expected values?  It may help to share sanitized output.