
Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Topics

Hi All, I have a field called filename, and I want to populate the results from it. I created two joins to separate the results into a success file and a failure file. Is there any other way to do this without using join?

| join CorrelationId type=left
    [ | search index=mulesoft applicationName IN (TEST) AND message IN ("*File put Succesfully*", "*successful Call*", "*file processed successfully*", "*Archive file processed successfully*", "*processed successfully for file name*")
      | rename content.Filename as SuccessFileName correlationId as CorrelationId
      | table CorrelationId SuccessFileName
      | stats values(*) as * by CorrelationId]
| table CorrelationId InterfaceName ApplicationName FileList SuccessFileName Timestamp
| join CorrelationId type=left
    [ | search index=mulesoft applicationName IN (p-oracle-fin-processor, p-oracle-fin-processor-2, p-wd-finance-api) AND priority IN (ERROR,WARN)
      | rename content.Filename as FailureFileName correlationId as CorrelationId timestamp as ErrorTimestamp content.ErrorType as ErrorType content.ErrorMsg as ErrorMsg
      | table FailureFileName CorrelationId ErrorType ErrorMsg ErrorTimestamp
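A minimal join-free sketch, assuming (as your two subsearches suggest) that success events come from the TEST application and failure events from the three processor apps, and that both carry correlationId and the content.* fields shown above. The general pattern is to OR both event sets into one base search, tag each event, and roll everything up with stats by CorrelationId; the fields from your outer search (InterfaceName, FileList, Timestamp, etc.) can be folded in the same way:

index=mulesoft (applicationName IN (TEST) AND message IN ("*File put Succesfully*", "*successful Call*", "*file processed successfully*", "*Archive file processed successfully*", "*processed successfully for file name*"))
    OR (applicationName IN (p-oracle-fin-processor, p-oracle-fin-processor-2, p-wd-finance-api) AND priority IN (ERROR, WARN))
| rename correlationId as CorrelationId
| eval outcome=if(applicationName="TEST", "success", "failure")
| eval SuccessFileName=if(outcome="success", 'content.Filename', null())
| eval FailureFileName=if(outcome="failure", 'content.Filename', null())
| eval ErrorType=if(outcome="failure", 'content.ErrorType', null())
| eval ErrorMsg=if(outcome="failure", 'content.ErrorMsg', null())
| eval ErrorTimestamp=if(outcome="failure", timestamp, null())
| stats values(SuccessFileName) as SuccessFileName values(FailureFileName) as FailureFileName values(ErrorType) as ErrorType values(ErrorMsg) as ErrorMsg values(ErrorTimestamp) as ErrorTimestamp by CorrelationId

Because everything runs as a single search, you also avoid the subsearch row limits that come with join.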
I have two queries which give me two tables, named Distributed and Mainframe, as below.

Distributed:

index=idx-esp source="*RUNINFO_HISTORY.*"
| rename STATUS as Status "COMP CODE" as CompCode APPL as ScheduleName NODE_GROUP as AgentName NODE_ID as HostName CPU_TIME as CPU-Time
| eval epoc_end_time=strptime(EndTime,"%Y-%m-%d %H:%M:%S")
| eval epoc_start_time=strptime(StartTime,"%Y-%m-%d %H:%M:%S")
| eval UpdatedTime=if(isnull(epoc_end_time),_indextime,epoc_end_time)
| eval DurationSecs=floor(UpdatedTime - epoc_start_time)
| eval Duration=tostring(DurationSecs,"duration")
| eval Source=if(like(source,"%RUNINFO_HISTORY%"),"Control-M","ESP")
| dedup Source ScheduleName ScheduleDate AgentName HostName JobName StartTime EndTime
| table ScheduleDate JobName StartTime EndTime Duration
| sort 0 - Duration
| head 10

Mainframe:

index=idx-esp source="*RUNINFO_HISTORY.*" DATA_CENTER=CPUA JOB_ID="0*"
| eval epoc_end_time=strptime(EndTime,"%Y-%m-%d %H:%M:%S")
| eval epoc_start_time=strptime(StartTime,"%Y-%m-%d %H:%M:%S")
| eval UpdatedTime=if(isnull(epoc_end_time),_indextime,epoc_end_time)
| eval DurationSecs=floor(UpdatedTime - epoc_start_time)
| eval Duration=tostring(DurationSecs,"duration")
| table ScheduleDate JOB_MEM_NAME StartTime EndTime Duration
| sort 0 - Duration
| head 10

I am trying to append both tables into one, using something like this:

index=idx-esp source="*RUNINFO_HISTORY.*"
| rename STATUS as Status "COMP CODE" as CompCode APPL as ScheduleName NODE_GROUP as AgentName NODE_ID as HostName CPU_TIME as CPU-Time
| eval epoc_end_time=strptime(EndTime,"%Y-%m-%d %H:%M:%S")
| eval epoc_start_time=strptime(StartTime,"%Y-%m-%d %H:%M:%S")
| eval UpdatedTime=if(isnull(epoc_end_time),_indextime,epoc_end_time)
| eval DurationSecs=floor(UpdatedTime - epoc_start_time)
| eval Duration=tostring(DurationSecs,"duration")
| eval Source=if(like(source,"%RUNINFO_HISTORY%"),"Control-M","ESP")
| dedup Source ScheduleName ScheduleDate AgentName HostName JobName StartTime EndTime
| table ScheduleDate JobName StartTime EndTime Duration
| sort 0 - Duration
| head 50
| append
    [search index=idx-esp source="*RUNINFO_HISTORY.*" DATA_CENTER=CPUA JOB_ID="0*"
    | eval epoc_end_time=strptime(EndTime,"%Y-%m-%d %H:%M:%S")
    | eval epoc_start_time=strptime(StartTime,"%Y-%m-%d %H:%M:%S")
    | eval UpdatedTime=if(isnull(epoc_end_time),_indextime,epoc_end_time)
    | eval DurationSecs=floor(UpdatedTime - epoc_start_time)
    | eval Duration=tostring(DurationSecs,"duration")
    | table ScheduleDate JOB_MEM_NAME StartTime EndTime Duration
    | sort 0 - Duration
    | head 50]

The issue is that I want to add a column named "Log_Source" at the start that says either Distributed or Mainframe for each corresponding row. I am not sure how to achieve it. Please help.
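A minimal sketch reusing your own append structure: tag each result set with an eval before and inside the append, then put Log_Source first in the final table. Only the two eval lines and the final table are new; everything else is your query as posted:

index=idx-esp source="*RUNINFO_HISTORY.*"
| rename STATUS as Status "COMP CODE" as CompCode APPL as ScheduleName NODE_GROUP as AgentName NODE_ID as HostName CPU_TIME as CPU-Time
| eval epoc_end_time=strptime(EndTime,"%Y-%m-%d %H:%M:%S")
| eval epoc_start_time=strptime(StartTime,"%Y-%m-%d %H:%M:%S")
| eval UpdatedTime=if(isnull(epoc_end_time),_indextime,epoc_end_time)
| eval DurationSecs=floor(UpdatedTime - epoc_start_time)
| eval Duration=tostring(DurationSecs,"duration")
| eval Source=if(like(source,"%RUNINFO_HISTORY%"),"Control-M","ESP")
| dedup Source ScheduleName ScheduleDate AgentName HostName JobName StartTime EndTime
| table ScheduleDate JobName StartTime EndTime Duration
| sort 0 - Duration
| head 50
| eval Log_Source="Distributed"
| append
    [search index=idx-esp source="*RUNINFO_HISTORY.*" DATA_CENTER=CPUA JOB_ID="0*"
    | eval epoc_end_time=strptime(EndTime,"%Y-%m-%d %H:%M:%S")
    | eval epoc_start_time=strptime(StartTime,"%Y-%m-%d %H:%M:%S")
    | eval UpdatedTime=if(isnull(epoc_end_time),_indextime,epoc_end_time)
    | eval DurationSecs=floor(UpdatedTime - epoc_start_time)
    | eval Duration=tostring(DurationSecs,"duration")
    | table ScheduleDate JOB_MEM_NAME StartTime EndTime Duration
    | sort 0 - Duration
    | head 50
    | eval Log_Source="Mainframe"]
| table Log_Source ScheduleDate JobName JOB_MEM_NAME StartTime EndTime Duration

The eval after each head only applies to that result set, so the appended rows keep their own label.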
Hi all, I have a question about using RelayState with SAML when Azure AD B2C is the IdP. We successfully integrated Splunk as the SP with AD B2C as the IdP using SAML and custom policies. Now we want to redirect users to another URL after successful authentication, and the only way forward I could find was the RelayState parameter. Below are all the combinations I tried for the Single Sign On (SSO) URL:

https://<tenant-id>.b2clogin.com/<tenant-id>.onmicrosoft.com/B2C_1A_signup_signin/samlp/sso/login?SAML_Request=<base64_SAML_Auth_Request>&RelayState=https%3A%2F%2FredirectWebsite.com
https://<tenant-id>.b2clogin.com/<tenant-id>.onmicrosoft.com/B2C_1A_signup_signin/samlp/sso/login?RelayState=https%3A%2F%2FredirectWebsite.com
https://<tenant-id>.b2clogin.com/<tenant-id>.onmicrosoft.com/B2C_1A_signup_signin/samlp/sso/login?RelayState=https://redirectWebsite.com

I keep getting the error "error while parsing relaystate. failed to decode relaystate." Any advice on how to embed the RelayState in the SSO URL?
Good morning. I am receiving Windows events on a collector with Splunk Edge Processor, and it is sending them correctly to the tenant, but not to the correct index. According to the data, the events go through the pipeline but end up in main instead of the intended index.

This is the SPL2 of the pipeline:

/* A valid SPL2 statement for a pipeline must start with "$pipeline", and include "from $source" and "into $destination". */
$pipeline = | from $source
| eval index = if(isnull(index), "usa_windows", index)
| into $destination;
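One thing worth checking, as a sketch rather than a confirmed fix: if the forwarder or source already sets an index value (for example the default main), the isnull() guard never fires and the event keeps its original index. Assuming these events should always land in usa_windows and that the index already exists in the tenant, setting the field unconditionally avoids that:

$pipeline = | from $source
/* set the index unconditionally; with the isnull() guard, events that arrive with index=main keep it */
| eval index = "usa_windows"
| into $destination;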
Hello, I have been receiving the events without any formatting applied, even though I have installed the add-on on the HF and in Splunk Cloud.
I would like some help creating a report that shows the difference in seconds between my event timestamp and the Splunk landing timestamp. The query below gives me the difference between _indextime and _time, but I would also like the difference in seconds between GenerationTime (e.g. 2024-04-23 12:49:52) and _indextime.

index=splunk_index sourcetype=splunk_sourcetype
| eval tnow = now()
| convert ctime(tnow)
| convert ctime(_indextime) as Index_Time
| eval secondsDifference=_indextime-_time
| table Node EventNumber GenerationTime Index_Time, _time, secondsDifference
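A minimal sketch, assuming GenerationTime is a string field in the form YYYY-MM-DD HH:MM:SS (adjust the strptime format if not): parse it into epoch seconds with strptime, then subtract it from _indextime before _indextime is converted to a display string. If GenerationTime is recorded in a different time zone than the indexer, the lag will be offset by that difference.

index=splunk_index sourcetype=splunk_sourcetype
| eval genEpoch=strptime(GenerationTime,"%Y-%m-%d %H:%M:%S")
| eval secondsDifference=_indextime-_time
| eval generationLagSeconds=round(_indextime-genEpoch)
| convert ctime(_indextime) as Index_Time
| table Node EventNumber GenerationTime Index_Time _time secondsDifference generationLagSeconds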
Hi, one bucket is stuck in the “fixup task pending” state with the error below. I tried restarting Splunk, re-syncing, and rolling the bucket, but it is not working. Can anyone suggest a possible way to troubleshoot the issue?

Missing enough suitable candidates to create replicated copy in order to meet replication policy. Missing={ site3:1 }
Hi All, I have created a dashboard for JSON data. There are two sets of data in the same index: one is Info.metadata{} and the other is Info.runtime_data{}, arriving as separate events in that index. Both kinds of events share one common field, "Info.Title". How can I combine these two events?
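A minimal sketch of one common approach, assuming Info.Title is extracted on both event types and that rolling the other fields up per title is acceptable (the index name your_index is a placeholder): search both event shapes together and merge them with stats on the shared key.

index=your_index
| stats values(*) as * by Info.Title

If you only need specific fields, replacing values(*) with explicit values("Info.metadata{}") and values("Info.runtime_data{}") keeps the result narrower and faster.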
Hello, I want to fetch a value present in the inputs.conf file (/Splunk/etc/apps/$app/local), i.e.:

[stanza-name]
value-name = value

How can I retrieve this value and use it inside a Python lookup script (stored in /Splunk/etc/apps/$app/bin)? Thanks.
Hello everyone, please help me with fetching events from a Windows Event Collector. I installed the universal forwarder on a Windows Server 2022 machine that holds the forwarded events from all computers. I am trying to send all of these forwarded events from the Windows Server 2022 host to my Splunk indexer via the forwarder, but the agent only sends events occasionally, not in real time. I can't see any errors in the SplunkForwarder logs or on the Splunk indexer. I am also using Splunk_TA_Windows to fetch the events.
Hi Team, I am looking for an option to monitor the page-load performance of a Salesforce Community Cloud application (built using Lightning Web Components) that runs in authenticated mode. We want to capture network timings, resource loading, and transaction times, to name a few. Is this possible with AppDynamics? If so, please point me to the relevant documentation. Thanks.
I'm currently working on optimizing our Splunk deployment and would like to gather some insights on the performance metrics of Splunk forwarders.

Transfer time for data transmission: I'm interested in understanding the typical time it takes for a Splunk forwarder to send a significant volume of data, say 10 GB, to the indexer. Are there any benchmarks or best practices for estimating this transfer time? Are there any factors or configurations that can significantly affect it?

Expected EPS (events per second): Additionally, I'm curious about the achievable events-per-second rates with Splunk forwarders. What are the typical EPS rates that organizations achieve in real-world scenarios? Are there any strategies or optimizations that can help improve EPS rates while maintaining stability and reliability?

Any insights, experiences, or recommendations regarding these performance metrics would be greatly appreciated. Thank you!
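Rather than relying on published benchmarks (actual rates depend heavily on the forwarder's thruput cap in limits.conf, the number of pipelines, network bandwidth, and event size; universal forwarders ship with a fairly low maxKBps default), one hedged approach is to measure what your own forwarders are already doing from their internal metrics. A sketch, assuming the forwarders send their _internal logs to the indexers; the thruput field names below come from metrics.log and may differ slightly between versions:

index=_internal source=*metrics.log* group=thruput name=thruput
| timechart span=5m avg(instantaneous_kbps) as avg_KBps by host

With an average KB/s per forwarder in hand, a rough transfer-time estimate for 10 GB is simply 10*1024*1024 KB divided by that rate.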
Hi Dear Malaysian Splunkers, as part of my SplunkTrust tasks, I have created a Splunk User Group for Kuala Lumpur, Malaysia: https://usergroups.splunk.com/kuala-lumpur-splunk-user-group/ Please join, and let's discuss monthly how to get more value from your data with Splunk. See you there. Thanks. Best Regards, Sekar
Hello, I have this search for a tabular format:

index="webbff" "SUCCESS: REQUEST"
| table _time verificationId code BROWSER BROWSER_VERSION OS OS_VERSION USER_AGENT status
| rename verificationId as "Verification ID", code as "HRC"
| sort -_time

The issue is with the BROWSER column: even when a user accesses our app via Edge, it still shows as Chrome. I found a difference between the two logs; the one for access via Edge contains "Edg" in the user agent.

Edge logs:

metadata={BROWSER=Chrome, LOCALE=, OS=Windows, USER_AGENT=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/xxx.xx (KHTML, like Gecko) Chrome/124.0.0.0 Safari/xxx.xx Edg/124.0.0.0, BROWSER_VERSION=124, LONGITUDE=, OS_VERSION=10, IP_ADDRESS=, APP_VERSION=, LATITUDE=})

Chrome logs:

metadata={BROWSER=Chrome, LOCALE=, OS=Mac OS X, USER_AGENT=Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/xxx.xx (KHTML, like Gecko) Chrome/124.0.0.0 Safari/xxx.xx, BROWSER_VERSION=124, LONGITUDE=, OS_VERSION=10, IP_ADDRESS=, APP_VERSION=, LATITUDE=})

My question is: how do I create a conditional search for BROWSER, like "if USER_AGENT contains Edg then Edge, else BROWSER"?
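A minimal sketch with eval, assuming USER_AGENT is already extracted as a field (as your table suggests): override BROWSER before the table command whenever the user agent carries the Edg/ token.

index="webbff" "SUCCESS: REQUEST"
| eval BROWSER=if(like(USER_AGENT,"%Edg/%"),"Edge",BROWSER)
| table _time verificationId code BROWSER BROWSER_VERSION OS OS_VERSION USER_AGENT status
| rename verificationId as "Verification ID", code as "HRC"
| sort -_time

The same pattern extends with a case() if you later need to distinguish other Chromium-based browsers.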
Hey guys, with data retention being set, is there a way to whitelist a specific container to prevent it from being deleted?
I apologize if the following question might be a bit basic, but I'm confused by the results. When I append the following condition to the "search" line, it returns a shortened list of results (from 47 to 3):

AND ("a" in ("a"))

Original code:

index=main_service ABC_DATASET Arguments.email="my_email@company_X.com"
| rename device_model as hardware, device_build as builds, device_train as trains, ABC_DATASET.Check_For_Feature_Availability as Check_Feature_Availability
| search (Check_Feature_Availability=false) AND ("a" in ("a"))
| table builds, trains, Check_Feature_Availability

I was expecting to see the same number of results. Am I wrong about my expectations, or am I missing something here? TIA
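The search command does not evaluate ("a" in ("a")) as a Boolean expression; most likely it treats those tokens as extra literal keyword filters, so only events that happen to contain them survive, which would explain the drop from 47 to 3. The search command's IN operator expects a field on the left, e.g. field IN (value1, value2). A hedged sketch of what the filter probably wants to look like, assuming the real intent is to match a field (here the builds field from your rename, chosen only as an illustration) against a list of values:

index=main_service ABC_DATASET Arguments.email="my_email@company_X.com"
| rename device_model as hardware, device_build as builds, device_train as trains, ABC_DATASET.Check_For_Feature_Availability as Check_Feature_Availability
| search Check_Feature_Availability=false AND builds IN ("a", "b")
| table builds, trains, Check_Feature_Availability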
Could someone help me derive a solution for the case below?

Background: We have an app in which we set all our saved searches as durable, because we don't want to miss any runs. If a scheduled search fails at its scheduled time due to an infrastructure or resource issue, it is covered in the next run. I am trying to capture the last status even after the durable logic has been applied.

Let's say I have four events. The first two runs (scheduled_time=12345 and scheduled_time=12346) of alert ABC failed. The third schedule, at 12347, covers those two as well as 12347 itself, and all succeed:

EVENT 1: savedsearch_name=ABC; status=skipped; scheduled_time=12345
EVENT 2: savedsearch_name=ABC; status=skipped; scheduled_time=12346
EVENT 3: savedsearch_name=ABC; status=success; durable_cursor=12345; scheduled_time=12347
EVENT 4: savedsearch_name=ABC; status=success; scheduled_time=12347

If I start with a query like this:

.. | stats last(status) by savedsearch_name scheduled_time

I get output like this:

savedsearch_name  last(status)  scheduled_time
ABC               skipped       12345
ABC               skipped       12346
ABC               success       12347

I need to write logic that:

A. Takes jobs whose last status is not success - here ABC 12345 and ABC 12346.
B. Takes events where durable_cursor != scheduled_time, i.e. runs that covered multiple missed schedules - in this case EVENT 3.
C. Then, for each failed saved search, takes its name and the scheduled time at which it failed, and checks whether that time falls between the durable_cursor and the scheduled_time of a later run of the same search with status=success, something like:

.. TAKE FAILED SAVEDSEARCH NAME TIME as FAILEDTIME
| where durable_cursor!=scheduled_time
| eval Flag=if(FAILEDTIME>=durable_cursor OR FAILEDTIME<=scheduled_time, "COVERED", "NOT COVERED")

How I have approached it so far, and where I am stuck: I split this into two reports.

First report: takes all the jobs whose last status is not success and tables the output with the fields SAVEDSEARCH NAME, SCHEDULEDTIME as FAILEDTIME, and LAST(STATUS) as FAILEDSTATUS. I save this result to a lookup, and the report runs over the last one-hour window.

Second report: refers to the lookup, takes the failed saved search names from it, searches only those events in the Splunk internal indexes where durable_cursor!=scheduled_time, and then checks whether the failed scheduled time falls between durable_cursor and the next scheduled_time with status=success.

This works fine when there is one run per saved search in the window, but not for multiple runs. Say job A has four runs in an hour and all except the first fail; in that case I cannot determine coverage, because the multivalue field coming from the lookup does not match exactly. Here is the question I posted about that: https://community.splunk.com/t5/Splunk-Search/How-to-retrieve-value-from-lookup-for-multivalue-field/m-p/684637#M233699

If anybody has alternate or better thoughts on this, can you please throw some light on it?
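One hedged alternative that avoids the lookup entirely, sketched under the assumption that the run records come from the scheduler logs in _internal and carry savedsearch_name, status, scheduled_time, and durable_cursor exactly as in your example: compute the last status per run, copy the coverage window of the successful catch-up runs across each saved search with eventstats, and then flag the failed runs that fall inside that window.

index=_internal sourcetype=scheduler savedsearch_name=*
| stats last(status) as last_status last(durable_cursor) as durable_cursor by savedsearch_name scheduled_time
| eventstats min(eval(if(last_status="success" AND durable_cursor!=scheduled_time, durable_cursor, null()))) as covered_from
             max(eval(if(last_status="success" AND durable_cursor!=scheduled_time, scheduled_time, null()))) as covered_to
             by savedsearch_name
| where last_status!="success"
| eval Flag=if(isnotnull(covered_from) AND scheduled_time>=covered_from AND scheduled_time<=covered_to, "COVERED", "NOT COVERED")
| table savedsearch_name scheduled_time last_status covered_from covered_to Flag

Because the coverage window is attached per savedsearch_name by eventstats rather than pulled from a lookup, multiple failed runs of the same search in the window are each evaluated on their own row.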
Hello, I have static data of about 200,000 rows (potentially growing) that needs to be moved to a summary index daily.

1) Is it possible to move the data from dbxquery to a summary index and re-write the data daily, so that no old data with an earlier _time remains after the re-write?
2) Is it possible to use a summary index without _time and make it behave like dbxquery?

The reason I am doing this is that I want to do some data manipulation (split, etc.) and move the result to another "placeholder" other than a CSV or dbxquery, so I can perform correlation with another index. For example:

| dbxquery query="SELECT * from Table_Test"

The scheduled report for the summary index will add something like this:

summaryindex spool=t uselb=t addtime=t index="summary" file="test_file" name="test" marker="hostname=\"https://test.com/\",report=\"test\""

Please suggest. Thank you for your help.
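A hedged sketch of the manual equivalent, in case it helps with testing (it reuses the query and the summary index name from your example; the eval _time line is only there to stamp every row with the load time):

| dbxquery query="SELECT * from Table_Test"
| eval _time=now()
| collect index=summary source="test_file" marker="report=\"test\""

One design note: indexed data is append-only, so a summary index cannot be "overwritten" daily; old events can only disappear through retention or an explicit delete. If a true daily rewrite is a hard requirement, a KV store collection filled with outputlookup (which replaces its contents on each run) may fit the placeholder role better than a summary index.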
Hey, I installed the Splunk Enterprise free trial on an Ubuntu server. This is the first time I am using Splunk, so I am following a video. I am having trouble locating the "Local event logs" option while adding data to Splunk from a universal forwarder on a Windows server. I want to capture event logs from the Windows server and see them in Splunk. Please help me out as soon as possible. Thank you.
Hello! I have been trying to get some logs into a metric index, and I'm wondering if they can be improved with better field extraction. This is what the logs look like:

t=1713291900 path="/data/p1/p2" stat=s1:s2:s3:s4 type=COUNTER value=12
t=1713291900 path="/data/p1/p2" stat=s1:s2:s5:s6 type=COUNTER value=18
t=1713291900 path="/data/p1/p2" stat=s1:s2:s3:s7 type=COUNTER value=2
t=1713291900 path="/data/p1/p2" stat=s1:s2:s3 type=COUNTER value=104
t=1713291900 path="/data/p1/p2" stat=s1:s2:s3 type=COUNTER value=18
t=1713291900 path="/data/p1/p2" stat=s1:s2:s5:s8 type=COUNTER value=18
t=1713291900 path="/data/p1/p2" stat=s1:s2:s5:s8:s9:10 type=COUNTER value=8
t=1713291900 path="/data/p1/p2" stat=s1:s2:s3:s4 type=COUNTER value=104
t=1713291900 path="/data/p1/p2" stat=s1:s2:s5:s8:s9 type=COUNTER value=140
t=1713291900 path="/data/p1/p2" stat=s1:s2:s5:s8:s9 type=COUNTER value=3
t=1713291900 path="/data/p1/p2" stat=s1:s2:s5:s8:s9 type=COUNTER value=1
t=1713291900 path="/data/p3/p4" stat=s20 type=COUNTER value=585
t=1713291900 path="/data/p3/p4" stat=s21 type=COUNTER value=585
t=1713291900 path="/data/p3/p4" stat=s22 type=TIMEELAPSED value=5497.12
t=1713291900 path="/data/p3/p5" stat=s23 type=COUNTER value=585
t=1713291900 path="/data/p1/p5" stat=s24 type=COUNTER value=585
t=1713291900 path="/data/p1/p5" stat=s25 type=TIMEELAPSED value=5497.12
t=1713291900 path="/data/p1/p5/p6" stat=s26 type=COUNTER value=253
t=1713291900 path="/data/p1/p5/p6" stat=s27 type=GAUGE value=1

t is the epoch time. path is the path of a URL; it is in double quotes, always starts with /data/, and can have anywhere between 2 and 7 (maybe more) subpaths. stat is either a single stat (like s20) or a colon-delimited string of between 3 and 6 stat names. type is either COUNTER, TIMEELAPSED, or GAUGE. value is the metric.

Right now I've been able to get a metric index set up that:
- assigns t as the timestamp and ignores t as a dimension or metric,
- makes value the metric,
- makes path, stat, and type dimensions.

This is my transforms.conf:

[metrics_field_extraction]
REGEX = ([a-zA-Z0-9_\.]+)=\"?([a-zA-Z0-9_\.\/:-]+)

[metric-schema:cm_log2metrics_keyvalue]
METRIC-SCHEMA-MEASURES = value
METRIC-SCHEMA-WHITELIST-DIMS = stat,path,type
METRIC-SCHEMA-BLACKLIST-DIMS = t

And props.conf (it's basically log2metrics_keyvalue; we need the cm_ prefix to match our license):

[cm_log2metrics_keyvalue]
DATETIME_CONFIG =
LINE_BREAKER = ([\r\n]+)
METRIC-SCHEMA-TRANSFORMS = metric-schema:cm_log2metrics_keyvalue
TRANSFORMS-EXTRACT = metrics_field_extraction
NO_BINARY_CHECK = true
category = Log to Metrics
description = '<key>=<value>' formatted data. Log-to-metrics processing converts the keys with numeric values into metric data points.
disabled = false
pulldown_type = 1

path and stat are extracted exactly as they appear in the logs. However, I'm wondering if it's possible to get each part of the path and stat fields into its own dimension, so that instead of:

_time                    path       stat      value  type
4/22/24 2:20:00.000 PM   /p1/p2/p3  s1:s2:s3  500    COUNTER

it would be:

_time                    path1  path2  path3  stat1  stat2  stat3  value  type
4/22/24 2:20:00.000 PM   p1     p2     p3     s1     s2     s3     500    COUNTER

My thinking was that we'd be able to get really granular stats and interesting graphs. Thanks in advance!
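While you evaluate ingestion-time options, the split can also be done at search time on the existing dimensions, since mstats results can be post-processed with eval. A hedged sketch, assuming a hypothetical metric index name my_metrics and that the measure is indexed under the metric name value (as METRIC-SCHEMA-MEASURES = value suggests); adjust both to whatever your metric catalog actually shows:

| mstats sum(value) as total WHERE index=my_metrics BY path, stat, type
| eval path_parts=split(ltrim(path,"/"),"/"), stat_parts=split(stat,":")
| eval path1=mvindex(path_parts,1), path2=mvindex(path_parts,2), path3=mvindex(path_parts,3)
| eval stat1=mvindex(stat_parts,0), stat2=mvindex(stat_parts,1), stat3=mvindex(stat_parts,2)
| fields - path_parts stat_parts

Since path always starts with /data/, mvindex 1 onward skips that common prefix; extend the mvindex lines if you need more than three subpath or stat levels. Doing it this way keeps the dimension cardinality in the metric index low, whereas splitting at ingest time would multiply the number of dimensions stored.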