All Posts


You should be able to do that by enclosing your field name in dollar signs ($):

$Application Server$="running"

Refer to: https://community.splunk.com/t5/Splunk-Search/Search-field-names-with-spaces-in-map-command-inner-search/m-p/241379#M71778
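As a minimal sketch following the answer's syntax (the index name is made up, and "Application Server" stands in for any field name containing a space):

```
index=my_index $Application Server$="running"
| stats count
```

The dollar signs delimit the field name so the space is not treated as a token separator, which is the same mechanism the linked thread uses inside a map inner search.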
I have UFW 9.2.0.1 and still get OpenSSL 1.0.2zi-fips; it's definitely not the same version you are pointing to here. And to be sure, I checked by running splunk cmd openssl version.
It is not clear to me what your expected output would look like. Can you please share an example?
Good afternoon everyone, I need your help with this. I have a stats sum with the wildcard *:

|appendpipe [stats sum(*) as * by Number | eval UserName="Total By Number: "]

and I need to format the sum(*) as * results with commas. How can I do that? Thank you
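One possible approach (not from the thread, and the field names here just follow the question) is to apply tostring with the "commas" option to every column via foreach after the appendpipe:

```
| appendpipe
    [ stats sum(*) as * by Number
    | eval UserName="Total By Number: " ]
| foreach *
    [ eval <<FIELD>>=tostring('<<FIELD>>', "commas") ]
```

Note that tostring converts the values to strings, which affects numeric sorting; if the results feed a table, fieldformat instead of eval keeps the underlying values numeric.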
We are using the Android agent for AppDynamics and R8 to obfuscate the code. The corresponding mapping file was uploaded; AppDynamics recognizes it and "deobfuscates" the stacktrace, but in the end both versions are identical and include obfuscated method names. This has happened with multiple app releases and crashes. Locally I am perfectly able to retrace the stacktrace provided by AppDynamics with the uploaded mapping file. Does someone have an idea what the reason for this may be? AppDynamics Gradle Plugin version: 23.6.0. Android Gradle Plugin version: 8.1.4.
If that is correct, then the planet earth and all humanity is in the wrong hands.
index="abc" aws_appcode="123" logGroup="watch" region="us-east-1" (cwmessage.message="*Notification(REQUESTED)*")
| stats latest(_time) as start_time by cwmessage.transId
| join cwmessage.transId
    [search index="abc" aws_appcode="123" logGroup="watch" region="us-east-1" (cwmessage.message="*Notification(COMPLETED)*")
    | stats latest(_time) as cdx_time by cwmessage.transId]
| join cwmessage.transId
    [search index="abc" aws_appcode="123" logGroup="watch" region="us-east-1" (cwmessage.message="*Notification(UPDATeD)*")
    | stats latest(_time) as upd_time by cwmessage.transId]
| eval cdx=cdx_time-start_time, upd=upd_time-cdx_time
| table cwmessage.transId, cdx, upd

In the query above I repeat the same index search multiple times. I want to use it as a base search and reference it in all the nested searches for the dashboard. Please help me. Thanks.
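For reference, one common way to share a search across panels in a Simple XML dashboard is a global base search that the panels post-process. A sketch (the id, panel layout, and fields list are illustrative; the base search should return the raw events plus a fields command so post-processing has what it needs):

```xml
<dashboard>
  <search id="base">
    <query>
      index="abc" aws_appcode="123" logGroup="watch" region="us-east-1"
      | fields _time cwmessage.message cwmessage.transId
    </query>
  </search>
  <row>
    <panel>
      <table>
        <search base="base">
          <query>
            search cwmessage.message="*Notification(REQUESTED)*"
            | stats latest(_time) as start_time by cwmessage.transId
          </query>
        </search>
      </table>
    </panel>
  </row>
</dashboard>
```

Each post-process search filters and aggregates the shared result set, so the index is scanned only once per dashboard load.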
Hi Everyone, I am trying to replicate a log modification that was possible with fluentd when using splunk-connect-for-kubernetes:

splunk_kubernetes_logging:
  cleanAuthtoken:
    tag: 'tail.containers.**'
    type: 'record_modifier'
    body: |
      # replace key log
      <replace>
        key log
        expression /"traffic_http_auth".*?:.*?".+?"/
        # replace string
        replace "\"traffic_http_auth\": \"auth cleared\""
      </replace>

Now that support for the above charts has ended, we have switched to splunk-otel-collector. Along with this we also switched to logsengine: otel, and we are having a hard time replicating this modification. Per the documentation I read, this should be done via processors (in the agent); please correct me if I am wrong here. I have tried two processors, but neither works. What am I missing here?

logsengine: otel
agent:
  enabled: true
  config:
    processors:
      attributes/log_body_regexp:
        actions:
          - key: traffic_http_auth
            action: update
            value: "obfuscated"
      transform:
        log_statements:
          - context: log
            statements:
              - set(traffic_http_auth, "REDACTED")

This is new to me; can anyone point me to where these log modifiers can be applied?

Thanks,
Ppal
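For comparison, a redaction of this kind is usually expressed with the transform processor's replace_pattern function applied to the log body. This is a sketch, not a verified config for this chart: it assumes the token appears inside the raw log body (not as an attribute), the processor name and regex are illustrative, and the exact escaping may need adjustment:

```yaml
agent:
  config:
    processors:
      transform/redact:
        log_statements:
          - context: log
            statements:
              - 'replace_pattern(body, "\"traffic_http_auth\"\\s*:\\s*\".*?\"", "\"traffic_http_auth\": \"auth cleared\"")'
```

The processor must also be added to the logs pipeline for it to take effect. A set(traffic_http_auth, ...) statement as in the question would only work if traffic_http_auth were a recognized path such as attributes["traffic_http_auth"]; a bare name is not a valid OTTL path, which may be why that attempt did nothing.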
We are also having the error below: Error occurred while connecting to eventhub: CBS Token authentication failed We were told that Splunk wasn't hitting AZ FW at all. Did you solve that? If so was ... See more...
We are also having the error below: Error occurred while connecting to eventhub: CBS Token authentication failed. We were told that Splunk wasn't hitting the AZ FW at all. Did you solve that? If so, was it a network opening? Please share so others can fix it as well.
Here is my sample. I want to get all saved searches, then from the returned result filter the field called "search" to find search strings that contain something like "| collect". So

| where (search LIKE "%| collect%")

does the job. Full search string:

| rest /servicesNS/-/-/saved/searches
| table title, cron_schedule next_scheduled_time eai:acl.owner actions eai:acl.app action.email action.email.to dispatch.earliest_time dispatch.latest_time search
| where (search LIKE "%| collect%")

Add-on: let's say I want to filter on a field called "action.summary_index" for a value equal to 1. I can do it as below, enclosing the field name in dollar signs ($):

| rest /servicesNS/-/-/saved/searches
| table title, cron_schedule next_scheduled_time eai:acl.owner actions eai:acl.app action.email action.email.to dispatch.earliest_time dispatch.latest_time search *
| where $action.summary_index$ = "1"
SentinelOne App v5.2: are there any guides or KB articles written on configuring the SentinelOne App? I can't seem to find any information on this anywhere. My understanding is that a service account needs to be created with a privileged role, and from there the API key is generated. The SentinelOne app will need the console URL and the API key. Am I missing anything?
If Agrupamento is a multi-value field, it will be counted once for each value in the multivalue field:

| makeresults
| eval field=split("AA","")
| stats count by field _time
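One quick way to confirm this in the original data (a sketch reusing the index and field names from the question) is to compare the event count with the total number of Agrupamento values; if total_values comes out at roughly double events, multivalue expansion explains the doubled stats count:

```
index=raw_fe5_autsust Aplicacao=HUB Endpoint="*/"
| eval n=mvcount(Agrupamento)
| stats count as events, sum(n) as total_values
```

mvcount returns the number of values in a multivalue field (and 1 for a single-value field), so the two totals are equal only when every event has exactly one Agrupamento.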
Hi @vinihei_987, are you sure that in some events you have only one Agrupamento? Probably there is more than one in some (or all) events, so you get a total greater than the number of events. Ciao. Giuseppe
It's not clear what the problem is.  Are you seeing repeated results or are the counts twice the expected values?  It may help to share sanitized output.
When I do a stats count by my field, it returns double the real number:

index=raw_fe5_autsust Aplicacao=HUB Endpoint="*/"
| eval Agrupamento=if(Agrupamento!="", Agrupamento, "AGRUPAMENTO_HOLDING/CE")
| eval Timestamp=strftime(_time, "%Y-%m-%d")
| stats count by Agrupamento, Timestamp
| sort -Timestamp

I already tried dedup, and when I count only by Timestamp it works fine.
How do I read which account is selected, or get the username and password of the selected account? I am not able to find any documentation on this. For now I am only hardcoding the account name in my code:

# get_auth = helper.get_user_credential_by_id('account0')
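In Add-on Builder generated inputs, the account the user selected is typically read with helper.get_arg, which returns the account as a dict. A minimal self-contained sketch; the parameter name "global_account" is an assumption (it is whatever you named the Account field when defining the input), and FakeHelper is only a stand-in so the example runs outside Splunk:

```python
def get_selected_credentials(helper, account_param="global_account"):
    # helper.get_arg() returns the account the user picked for this input
    # as a dict, instead of a hard-coded account id.
    account = helper.get_arg(account_param)
    return account.get("username"), account.get("password")

# Stand-in for the framework-provided helper object; in a real add-on the
# helper is passed into collect_events() by Add-on Builder.
class FakeHelper:
    def get_arg(self, name):
        return {"username": "svc_user", "password": "s3cret"}

print(get_selected_credentials(FakeHelper()))  # ('svc_user', 's3cret')
```
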
I have the same issue. Did you manage to find a solution for it? Right now I do helper.get_user_credential_by_id('<id_name>')
The match function treats "%" as a literal character rather than as a wildcard.  Instead, match uses regular expressions.  Remove the "%" from the match string and you should get a status value.
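The contrast can be sketched in a throwaway search (field name and values are made up): like() understands % as a wildcard, while match() expects a regular expression, so the same "%" that works in one is a literal percent sign in the other.

```
| makeresults
| eval raw="status=OK message=done"
| eval like_hit=if(like(raw, "%OK%"), "yes", "no")
| eval match_hit=if(match(raw, "OK"), "yes", "no")
```

Both evals should come back "yes" here, whereas match(raw, "%OK%") would miss because no literal "%OK%" appears in the value.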
If the rex command works perfectly then you should have a field called "folder" with the extracted data in it.  Is that what is happening?  If not, please describe how the rex command is not acting as expected.  Note that the "folder" field will be present only within the query that extracted it.  If you need the field to be available to all queries then it will have to be extracted at index-time using a transform.
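For reference, an index-time extraction of this sort is declared on the indexing tier in transforms.conf and props.conf. A sketch under stated assumptions: the stanza names, sourcetype, and regex are placeholders, and the regex mirrors the one discussed in this thread:

```
# transforms.conf
[extract_folder]
SOURCE_KEY = MetaData:Source
REGEX = Snowflake\/([^\/]+)
FORMAT = folder::$1
WRITE_META = true

# props.conf
[your_sourcetype]
TRANSFORMS-folder = extract_folder
```

An accompanying fields.conf stanza ([folder] with INDEXED = true) is usually needed so searches treat the field as indexed; also note that index-time fields cost index space, so search-time extraction is generally preferred when it suffices.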
Hi @tamir, you have to create a new field using the following syntax:

Snowflake\/(?<folder>[^\/]+) in source

In a few words, you have to add "in" and the field to use for the extraction. Ciao. Giuseppe
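That "regex in field" form matches the props.conf EXTRACT syntax for search-time extractions; as a sketch (the sourcetype stanza name is a placeholder):

```
# props.conf -- search-time field extraction
[your_sourcetype]
EXTRACT-folder = Snowflake\/(?<folder>[^\/]+) in source
```

The equivalent inline test before committing it to props.conf would be | rex field=source "Snowflake\/(?<folder>[^\/]+)".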