All Posts

When I do a stats count by my field, it returns double the real number:

index=raw_fe5_autsust Aplicacao=HUB Endpoint="*/"
| eval Agrupamento=if(Agrupamento!="", Agrupamento, "AGRUPAMENTO_HOLDING/CE")
| eval Timestamp=strftime(_time, "%Y-%m-%d")
| stats count by Agrupamento, Timestamp
| sort -Timestamp

I already tried dedup, and when I count only by Timestamp it works fine.
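A common cause of this symptom (counts doubled when grouping by a field, but correct when grouping by Timestamp alone) is that the field is multivalued, e.g. extracted twice per event, since stats counts an event once per value of a multivalue by-field. A hedged diagnostic sketch against the same data:

index=raw_fe5_autsust Aplicacao=HUB Endpoint="*/"
| eval valcount=mvcount(Agrupamento)
| stats count by valcount

If valcount is 2, deduplicating the values before the stats (| eval Agrupamento=mvdedup(Agrupamento)) should bring the counts back in line.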
How do I read which account is selected, or get the username and password of the selected account? I am not able to find any documentation on this. I am only hardcoding the account name in my code. # get_auth = helper.get_user_credential_by_id('account0')
I have the same issue. Did you manage to find a solution for it? Right now I do helper.get_user_credential_by_id('<id_name>')
The match function treats "%" as a literal character rather than as a wildcard.  Instead, match uses regular expressions.  Remove the "%" from the match string and you should get a status value.
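As a quick illustration of the difference (using a hypothetical message field):

| eval like_hit=if(like(message, "%Process Completed%"), "yes", "no")
| eval match_hit=if(match(message, "Process Completed"), "yes", "no")

like() uses SQL-style % wildcards, while match() treats its second argument as a regular expression, so a bare substring is enough for a partial match.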
If the rex command works perfectly then you should have a field called "folder" with the extracted data in it. Is that what is happening? If not, please describe how the rex command is not acting as expected. Note that the "folder" field will be present only within the query that extracted it. If you need the field to be available to all searches, it will have to be defined as a persistent extraction, for example a search-time extraction in props.conf or, if you really need it indexed, an index-time transform.
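For reference, a hedged sketch of the index-time variant (stanza and sourcetype names are hypothetical; a search-time extraction, as shown in the next reply, is usually the better first choice):

transforms.conf:
[extract_folder]
REGEX = Snowflake\/([^\/]+)
FORMAT = folder::$1
SOURCE_KEY = MetaData:Source
WRITE_META = true

props.conf:
[my_sourcetype]
TRANSFORMS-folder = extract_folder

fields.conf:
[folder]
INDEXED = true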
Hi @tamir, you have to create a new field extraction using the following syntax: Snowflake\/(?<folder>[^\/]+) in source. In other words, you have to add "in" and the field to use for the extraction. Ciao. Giuseppe
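In props.conf terms that would look something like this (the sourcetype name is a placeholder):

[my_sourcetype]
EXTRACT-folder = Snowflake\/(?<folder>[^\/]+) in source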
Hey guys, did someone ever happen to come across this problem? I'm using Splunk Cloud. I'm trying to extract a new field using regex, but the data are under the source field:

| rex field=source "Snowflake\/(?<folder>[^\/]+)"

This is the regex I'm using; when I use it in the search it works perfectly. But the main goal is to save this as a permanent field. I know that field extraction draws from _raw; is there an option to direct the Cloud to pull from source and save it as a permanent field?
Hi @richgalloway, as you mentioned, I use a match condition in the case statement. Let me share the query. If I use match, I am not getting the Status field:

index="mule" applicationName="api" environment=DEV timestamp (message="onDemand Flow for concur Expense Report file with FileID Started") OR (message="Exchange Rates Scheduler process started") OR (message="Exchange Rates Process Completed. File successfully sent to Concur*") OR (message="DEV(SUCCESS): Exchange Rates OnDemand Interface Run Report - Concur") OR ("TEST(SUCCESS): Exchange Rates OnDemand Interface Run Report - Concur") OR ("PRD(SUCCESS): Exchange Rates Interface Run Report - Concur")
| transaction correlationId
| rename timestamp as Timestamp correlationId as CorrelationId tracePoint as TracePoint content.payload.TargetFileName as TargetFileName
| eval JobType=case(like('message',"%onDemand Flow for concur Expense Report file with FileID Started%"),"OnDemand", like('message',"%Exchange Rates Scheduler process started%"),"Scheduled", true(), "Unknown")
| eval Status=case(like('message',"%Exchange Rates Process Completed. File sucessfully sent to Concur%"),"SUCCESS", match('message',"%(TEST|DEV|PRD)(SUCCESS): Exchange Rates OnDemand Interface Run Report - Concur%"),"SUCCESS", like('TracePoint',"%EXCEPTION%"),"ERROR")
| eventstats min(Timestamp) AS Start_Time, max(Timestamp) AS End_Time by CorrelationId
| eval StartTime=round(strptime(Start_Time, "%Y-%m-%dT%H:%M:%S.%QZ"))
| eval EndTime=round(strptime(End_Time, "%Y-%m-%dT%H:%M:%S.%QZ"))
| eval ElapsedTimeInSecs=EndTime-StartTime
| eval "Total Elapsed Time"=strftime(ElapsedTimeInSecs,"%H:%M:%S")
| rename Start_Time as Timestamp
| table Status JobType ElapsedTimeInSecs "Total Elapsed Time" Timestamp CorrelationId message TargetFileName
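Applying the match() point above, a hedged sketch of the corrected Status eval: drop the % wildcards (match does a regex substring match, not a like pattern) and escape the literal parentheses around SUCCESS:

| eval Status=case(like('message',"%Exchange Rates Process Completed. File sucessfully sent to Concur%"),"SUCCESS", match('message',"(TEST|DEV|PRD)\(SUCCESS\): Exchange Rates OnDemand Interface Run Report - Concur"),"SUCCESS", like('TracePoint',"%EXCEPTION%"),"ERROR")

One more thing worth checking: the base search looks for "File successfully sent to Concur" while the like pattern says "sucessfully"; whichever spelling actually appears in the events is the one both should use.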
If they are not already extracted, you need to extract the trace number, error codes, etc. If you need help with this, you will need to share some representative anonymised versions of your events, with details of what you want extracted, e.g. which part of the event goes into which field.
Try variations on your query to see if you can isolate the source or sourcetype that is causing the spike.

index=_internal metrics kb series!=_* "group=per_source_thruput" earliest=-30d
| eval mb = kb / 1024
| timechart fixedrange=t span=1d sum(mb) by series

index=_internal metrics kb series!=_* "group=per_index_thruput" earliest=-30d
| eval mb = kb / 1024
| timechart fixedrange=t span=1d sum(mb) by series
This is probably down to your data sources. You should check for patterns of increased logging by your apps at the weekends, other activity on the hosts, etc. Can you narrow down the time periods when the increase in logging occurs? Do you have any batch jobs running at these times which might account for the additional data? You need to investigate the nature of the increase further.
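To narrow down the time periods, a finer-grained variation of the same thruput search may help; a sketch, assuming the 30-day window from your original query:

index=_internal metrics kb series!=_* "group=per_sourcetype_thruput" earliest=-30d
| eval mb = kb / 1024
| timechart fixedrange=t span=1h sum(mb) by series

With span=1h you can see whether the weekend increase is spread evenly or clustered around particular hours, which is a strong hint of a scheduled batch job.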
Hi, I would like to know how I can create a new filter by a field such as "slack channel name" or phantom artifact id. How is it created? Thanks!
Hi @richgalloway, yes, if anything comes up I will let you know. Thanks
Hello, can anyone assist in determining why my Splunk instance ingests large amounts of data ONLY on the weekends? This appears to be across the board for all hosts, as near as I can tell. I run this command:

index=_internal metrics kb series!=_* "group=per_host_thruput" earliest=-30d
| eval mb = kb / 1024
| timechart fixedrange=t span=1d sum(mb) by series

and it shows the daily ingest for numerous forwarders. During the week it averages out, but over the weekend it exceeds my daily ingest limit, causing warnings. I would like to find out the cause and a possible solution so I can even out the ingestion and not get violations. Much appreciated for any assistance!
They are not extracted. They are part of the log entries. Also, is there a possibility to display the complete Error or Exception in the last column?

TraceNumber   Error   Exception   ReturnCode   Complete Error or Exception
11111         YES     NO          YES          Full Exception....................................
1234          YES     NO          YES          Full Error........................
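Since the values are embedded in the log entries, they would need to be pulled out with rex first. A minimal sketch, with entirely hypothetical patterns (the real regexes depend on your log format):

| rex "TraceNumber:\s*(?<TraceNumber>\d+)"
| rex "ReturnCode:\s*(?<ReturnCode>\S+)"
| rex "(?<Detail>(Error|Exception):.*)"
| eval Error=if(match(Detail, "^Error"), "YES", "NO")
| eval Exception=if(match(Detail, "^Exception"), "YES", "NO")
| table TraceNumber Error Exception ReturnCode Detail

The Detail field here captures the full error or exception text for the last column.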
I started using Process Monitor from SysInternals to see which files the migration tool had problems with, and once I fixed permissions on those, the problem went away.
Since all the lookups appear to be the same, why not do the lookup first, then evaluate (with your conditions) whether the results are worth keeping?
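A hedged sketch of that ordering, with hypothetical lookup and field names:

index=main
| lookup my_lookup key OUTPUT category
| where condition_a="x" OR category="keep"

That is, run the lookup unconditionally, then use where (or eval) to decide which results to keep.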
Hi @jorma,

There are a few things you need to check for certificates.
Check that the certificate setting is pointing to the right file in web.conf.
If you have a different name for the Splunk web URL, make sure that all of them are in the SAN part of the certificate file.

I had encountered both of the above issues, and once I made the change, our Splunk instance was working perfectly.

Thanks,
Pravin
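For reference, a minimal sketch of the relevant web.conf settings (paths are placeholders):

[settings]
enableSplunkWebSSL = true
serverCert = /opt/splunk/etc/auth/mycerts/splunkweb-cert.pem
privKeyPath = /opt/splunk/etc/auth/mycerts/splunkweb-key.pem

serverCert should point at the certificate (with any intermediate chain appended) and privKeyPath at the matching private key.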
Hi, I have the same problem, did you find a solution? 
Hi all, our SVC calculation is in _introspection and our search name is in _internal and _audit. We need a common field to map those together so we can tie an SVC (and dollar amount) to a particular search. We tried doing it using the SID, but that is not matching. Can someone help me out here based on your experiences?
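One thing worth checking before giving up on the SID: in _audit the search_id value is wrapped in single quotes, while _introspection reports the sid unquoted, so a direct comparison can fail. A hedged sketch that normalises both sides before combining (the _introspection field names may differ in your environment):

index=_audit action=search info=completed
| eval sid=trim(search_id, "'")
| fields sid user search
| append
    [ search index=_introspection sourcetype=splunk_resource_usage "data.search_props.sid"=*
    | rename "data.search_props.sid" as sid
    | fields sid ]
| stats values(user) as user values(search) as search count by sid

If the two sources line up, each sid should show a count greater than 1.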