All Posts


Hey guys, has anyone ever come across this problem? I'm using Splunk Cloud, and I'm trying to extract a new field using regex, but the data is in the source field.

| rex field=source "Snowflake\/(?<folder>[^\/]+)"

This is the regex I'm using; when I use it in a search it works perfectly. But the main goal is to save this extraction as a permanent field. I know that field extractions draw from _raw; is there an option to direct Splunk Cloud to pull from the source field and save it as a permanent field?
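For what it's worth, inline extractions configured in props.conf can target a field other than _raw with an `in <field>` clause. A minimal sketch, assuming a sourcetype named my_sourcetype (a placeholder):

```
# props.conf (sketch; the sourcetype name is a placeholder)
[my_sourcetype]
# "in source" runs the regex against the source field instead of _raw
EXTRACT-folder = Snowflake\/(?<folder>[^\/]+) in source
```

In Splunk Cloud this kind of extraction is typically managed through Settings > Fields > Field extractions or pushed in a private app.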
Hi @richgalloway

As you mentioned the match condition in the case statement, let me share the query. If I use match I am not getting the Status field:

index="mule" applicationName="api" environment=DEV timestamp (message="onDemand Flow for concur Expense Report file with FileID Started") OR (message="Exchange Rates Scheduler process started") OR (message="Exchange Rates Process Completed. File successfully sent to Concur*") OR (message="DEV(SUCCESS): Exchange Rates OnDemand Interface Run Report - Concur") OR ("TEST(SUCCESS): Exchange Rates OnDemand Interface Run Report - Concur") OR ("PRD(SUCCESS): Exchange Rates Interface Run Report - Concur")
| transaction correlationId
| rename timestamp as Timestamp correlationId as CorrelationId tracePoint as TracePoint content.payload.TargetFileName as TargetFileName
| eval JobType=case(like('message',"%onDemand Flow for concur Expense Report file with FileID Started%"),"OnDemand", like('message',"%Exchange Rates Scheduler process started%"),"Scheduled", true(), "Unknown")
| eval Status=case(like('message',"%Exchange Rates Process Completed. File sucessfully sent to Concur%"),"SUCCESS", match('message',"%(TEST|DEV|PRD)(SUCCESS): Exchange Rates OnDemand Interface Run Report - Concur%"),"SUCCESS", like('TracePoint',"%EXCEPTION%"),"ERROR")
| eventstats min(Timestamp) AS Start_Time, max(Timestamp) AS End_Time by CorrelationId
| eval StartTime=round(strptime(Start_Time, "%Y-%m-%dT%H:%M:%S.%QZ"))
| eval EndTime=round(strptime(End_Time, "%Y-%m-%dT%H:%M:%S.%QZ"))
| eval ElapsedTimeInSecs=EndTime-StartTime
| eval "Total Elapsed Time"=strftime(ElapsedTimeInSecs,"%H:%M:%S")
| rename Start_Time as Timestamp
| table Status JobType ElapsedTimeInSecs "Total Elapsed Time" Timestamp CorrelationId message TargetFileName
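One thing worth noting in the query above (an observation, not a confirmed fix): like() uses % wildcards, but match() takes a regular expression, so the % characters in the match() pattern are matched literally and the unescaped parentheses form a capture group rather than literal text. A sketch of the Status eval with a regex instead (note also that the base search says "successfully" while the like() clause says "sucessfully"):

```
| eval Status=case(
    like('message',"%Exchange Rates Process Completed. File successfully sent to Concur%"),"SUCCESS",
    match('message',"(TEST|DEV|PRD)\(SUCCESS\): Exchange Rates.*Interface Run Report - Concur"),"SUCCESS",
    like('TracePoint',"%EXCEPTION%"),"ERROR")
```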
If they are not already extracted, you need to extract the trace number and error codes etc. If you need help with this, you will need to share some representative anonymised versions of your events, with details of what you want extracted e.g. what part of the event goes into which field.
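To illustrate what such extractions might look like, here is a sketch only — the field names and patterns below are purely hypothetical, since the actual event format hasn't been shared:

```
| rex field=_raw "TraceNumber[:=]\s*(?<TraceNumber>\d+)"
| rex field=_raw "ReturnCode[:=]\s*(?<ReturnCode>\S+)"
| table TraceNumber ReturnCode
```

The real patterns would need to be written against the actual log entries once they are shared.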
Try variations on your query to see if you can isolate the source or sourcetype that is causing the spike:

index=_internal metrics kb series!=_* "group=per_source_thruput" earliest=-30d | eval mb = kb / 1024 | timechart fixedrange=t span=1d sum(mb) by series

index=_internal metrics kb series!=_* "group=per_index_thruput" earliest=-30d | eval mb = kb / 1024 | timechart fixedrange=t span=1d sum(mb) by series
This is probably down to your data sources. You should check for patterns of increased logging by your apps at the weekends, other activity on the hosts, etc. Can you narrow down the time periods when the increase in logging occurs? Do you have any batch jobs running at these times which might account for the additional data? You need to investigate the nature of the increase further.
Hi, I would like to know how I can create a new filter by a field like "slack channel name" / phantom artifact id. How is it created? Thanks!
Hi @richgalloway  Yes, if anything comes up I will let you know. Thanks
Hello, can anyone assist in determining why my Splunk instance ingests large amounts of data ONLY on the weekends? This appears to be across the board for all hosts, as near as I can tell. I run this command:

index=_internal metrics kb series!=_* "group=per_host_thruput" earliest=-30d | eval mb = kb / 1024 | timechart fixedrange=t span=1d sum(mb) by series

and it shows the daily ingest for numerous forwarders. During the week it averages out, but over the weekend it exceeds my daily ingest limit, causing warnings. I would like to find out the cause and a possible solution so I can even out the ingestion and not get violations. Much appreciated for any assistance!
They are not extracted. They are part of log entries. Also, is there a possibility to display the complete Error or Exception in the last column?

TraceNumber   Error   Exception   ReturnCode   Complete/Error or Exception
11111         YES     NO          YES         Full Exception....................................
1234          YES     NO          YES         Full Error........................
I started using Process Monitor from SysInternals to see what files the migration tool had problems with and when I fixed permissions on those the problem was fixed.  
Since all the lookups appear to be the same, why not do the lookup first, then evaluate (with your conditions) whether the results are worth keeping?
Hi @jorma,

There are a few things you need to check for certificates:
1. Check that the certificate setting points to the right file in web.conf.
2. If you have a different name for the Splunk Web URL, make sure all of the names are in the SAN part of the certificate file.

I had encountered both of the above issues, and once I made the changes our Splunk instance was working perfectly.

Thanks,
Pravin
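As a sketch, the web.conf settings being referred to might look like this (the file paths are placeholders):

```
# web.conf (sketch; paths are placeholders)
[settings]
enableSplunkWebSSL = true
serverCert = /opt/splunk/etc/auth/mycerts/splunkweb-cert.pem
privKeyPath = /opt/splunk/etc/auth/mycerts/splunkweb-key.pem
```

A Splunk Web restart is needed after changing these settings.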
Hi, I have the same problem, did you find a solution? 
Hi All, our SVC calculation is in _introspection, and our search name is in _internal and _audit. We need a common field to map those together so we can tie an SVC (and dollar amount) to a particular search. We tried doing it using the SID, but that is not matching. Can someone help me out here based on your experiences?
I have fields aa, bb, cc, dd, hostname, and sometimes a few field values may be null in the payload. What I want to do:

if (aa, bb is not null) then lookup abc.csv name output name hostname ip
if (cc, dd is not null) then lookup abc.csv name output name hostname ip
if hostname=echo then lookup abc.csv name output name hostname ip

Here is the catch: if the 1st condition is executed, it should ignore the 2nd and 3rd. If the 2nd is executed, then the 3rd should be ignored. Likewise, I have to go up to 10 if conditions.
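Since all the branches above run the same lookup, one way to collapse the cascade is to do the lookup once and use case(), whose first-match-wins ordering gives the same precedence. A sketch, assuming the abc.csv lookup and field names above:

```
| lookup abc.csv name OUTPUT hostname ip
| eval matched_rule=case(
    isnotnull(aa) AND isnotnull(bb), "rule1",
    isnotnull(cc) AND isnotnull(dd), "rule2",
    hostname=="echo", "rule3",
    true(), "none")
```

Events where matched_rule="none" could then be filtered out, or the lookup outputs cleared, depending on the desired behaviour.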
Hello, is this still working with version 4.0.2 of the app? I made the change, but unfortunately nothing happens. How do you trigger the deletion? Restart Splunk? Unfortunately, I can no longer make any backups because I get an error saying that I have reached the size limit for backups. Thanks for your help.
Good point - not easy to use in a case statement though
Hi, I know as part of SPL-212687 this issue was fixed in 8.2.7 and 9.0+ however we have had some hosts drop their defender logs after receiving a Windows Defender update. These UFs are on version 9.0.2 but have still reported this issue. Is there any known problem that would cause this?
Have you tried opening the searches from the dashboard in a search window? Try building up the search line by line until you find where the data is no longer available. If it is not available from the start, try checking that you have the correct permissions to see the data.
Yes, you can, using the lookup eval function: https://docs.splunk.com/Documentation/SplunkCloud/9.1.2312/SearchReference/ConditionalFunctions#lookup.28.26lt.3Blookup_table.26gt.3B.2C.26lt.3Bjson_object.26gt.3B.2C.26lt.3Bjson_array.26gt.3B.29 Note that the lookup has to come from a CSV; you cannot use KV store lookups.
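A sketch of what that could look like with the abc.csv lookup from the related question (the field names are assumptions; the signature is lookup(<lookup_table>, <json_object>, <json_array>) per the docs page above):

```
| eval result = lookup("abc.csv", json_object("name", name), json_array("hostname", "ip"))
| eval hostname = json_extract(result, "hostname"), ip = json_extract(result, "ip")
```

The function returns a JSON object, so json_extract() (or similar) is used to pull the individual output fields back out.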