All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


How do I configure instrumentation to export spans to Splunk APM and Splunk Observability Cloud?
What are the compatible span formats for Splunk APM and Splunk Observability Cloud?
Is there a good example of how to instrument an application to export spans to Splunk APM and Splunk Observability Cloud?
How do I use Splunk APM and Splunk Observability Cloud to troubleshoot problems?
How do I get started with Splunk APM?
Wondering how to view your alerts, or even customize them, in Splunk Infrastructure Monitoring (IMM), part of Splunk Observability Cloud?
Wondering how to set up Detectors to Trigger Alerts with Splunk Infrastructure Monitoring, part of Splunk Observability Cloud?
Wondering where your metrics live in Splunk Infrastructure Monitoring or IMM, as part of the Splunk Observability Cloud? 
Wondering how you can create charts and dashboards in Splunk Infrastructure Monitoring or IMM, as part of the Splunk Observability Cloud offering?
Hello, I have configured a custom POJO rule to detect and split business transactions (refer to screenshot), but I noticed that some business transactions have IDs at the end of them (refer to screenshot).

Example BT name: wso2./identity/accounts/4.0.0/users/profile/bca2fe18-e665-4d9b-a764-df3e839f6024

How do I ignore this ID in the custom POJO rule I configured?
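One possible approach, assuming the split/naming mechanism in your rule accepts a regular expression with a capture group: match only the stable part of the path and leave the trailing UUID outside the capture, so all requests collapse into one BT name. The pattern below is a sketch built from the example BT name in the question; adjust the prefix to your actual URIs.

```
^(wso2\./identity/accounts/4\.0\.0/users/profile)/[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}$
```

Group 1 captures everything up to (but not including) the UUID, so naming the BT from group 1 yields wso2./identity/accounts/4.0.0/users/profile for every user ID.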
Wondering what built-in options Splunk Infrastructure Monitoring (IMM) provides to see your data, as part of the Splunk Observability Cloud offering?
Wondering if there is a reference guide to help me understand functions and terminology for Splunk Infrastructure Monitoring or IMM as part of the Splunk Observability Cloud offering?
Wondering how to get started with Splunk Infrastructure Monitoring or IMM as part of the Splunk Observability offering?
Hello Splunkers, I have a bit of an issue onboarding some AWS Canaries from S3. We have deployed the SQS/SNS and S3 and the files are coming in fine. However, each canary writes 4 files:

TXT - detailed report - I want this
JSON - summary report - I want this
PNG - image - I don't want this
HTML - I don't want this

Because they are all coming in on a single queue they are getting the same sourcetype, which is not working as they are very different structures. As such I've built the following props:

[aws:canaries:summary]
DATETIME_CONFIG =
INDEXED_EXTRACTIONS = json
KV_MODE = none
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
category = Structured
description = AWS Canary JSON Summary File
disabled = false
pulldown_type = true

[aws:canaries:detailed]
BREAK_ONLY_BEFORE_DATE =
DATETIME_CONFIG =
LINE_BREAKER = (Start Canary)
MAX_TIMESTAMP_LOOKAHEAD = 50
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
TIME_PREFIX = timestamp\:\s
TRUNCATE = 0
category = Application
description = AWS Canary Detailed TXT file
disabled = false
pulldown_type = true

This is on the HF and the indexer cluster. In order to separate and drop the files, the following is also added to the props/transforms:

Props:

[aws:canaries]
TRANSFORMS-aws_canaries = set_aws_canary_json, set_aws_canary_txt, drop_aws_canary_png, drop_aws_canary_html

Transforms:

##### AWS Canaries - change sourcetype for SQS based S3 files and drop PNG files #####
[set_aws_canary_json]
SOURCE_KEY = MetaData:Source
REGEX = ^source::.*json
FORMAT = sourcetype::aws:canaries:summary
DEST_KEY = MetaData:Sourcetype

[set_aws_canary_txt]
SOURCE_KEY = MetaData:Source
REGEX = ^source::.*txt
FORMAT = sourcetype::aws:canaries:detailed
DEST_KEY = MetaData:Sourcetype

[drop_aws_canary_png]
SOURCE_KEY = MetaData:Source
REGEX = ^source::.*png
FORMAT = nullQueue
DEST_KEY = queue

[drop_aws_canary_html]
SOURCE_KEY = MetaData:Source
REGEX = ^source::.*html
FORMAT = nullQueue
DEST_KEY = queue

Again, both applied to HF and indexers.
On the inputs I have the following:

[aws_sqs_based_s3://canaries]
aws_account = SplunkForwarderRole
aws_iam_role = canaries
index = canaries
interval = 300
s3_file_decoder = CustomLogs
sourcetype = aws:canaries
sqs_batch_size = 10
sqs_queue_region = us-east-1
sqs_queue_url = https://queue.amazonaws.com/1111111111/canaries
disabled = 1

The idea is that the input receives the data and sourcetypes it as aws:canaries, then parsing/transforms alters the sourcetype. On the SH I can see the files are being sourcetyped correctly (and dropped), however event breaking is not working... the JSON is broken on every line, and it looks like the TXT file breaks at the date. Has anyone configured something similar? I suspect the event breaking is happening as part of the aws:canaries sourcetype? Just not sure. Any help appreciated!!
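A likely explanation: line breaking runs in the parsing pipeline under the sourcetype the event arrives with (aws:canaries), while TRANSFORMS-based sourcetype rewrites happen later, in the typing pipeline, so the LINE_BREAKER settings on the rewritten sourcetypes never fire. One hedged workaround is to key the line-breaking props on the source path instead of the sourcetype, so each file format gets its breaker before the rename. This sketch assumes the S3 object keys end in .txt and .json:

```
# props.conf on the HF - hypothetical sketch, keyed on source::
# so breaking applies before the sourcetype rewrite
[source::...txt]
SHOULD_LINEMERGE = false
LINE_BREAKER = (Start Canary)
TIME_PREFIX = timestamp\:\s
MAX_TIMESTAMP_LOOKAHEAD = 50

[source::...json]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
INDEXED_EXTRACTIONS = json
```

The sourcetype-rewrite and nullQueue transforms from the question can stay as they are; only the breaking moves to source-scoped stanzas.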
Hello, I am trying to use a subsearch in another search but am not sure how to format it properly.

Subsearch:

eventtype=pan (https://link1.net OR https://link2.net OR https://link3.net)
| rex field=url "LEN_(?<serial>\w+)"
| fillnull value=NULL src_bunit, serial
| fields src_bunit
| dedup src_bunit
| mvcombine src_bunit delim=","
| nomv src_bunit
| format

The syntax shown from the format command is:

( src_bunit="A,B,C,D,E,F" ) )

On the main search I get this error:

Error in 'search' command: Unable to parse the search: Right hand side of IN must be a collection of literals.

The main search:

eventtype=dsp_inventory device_control_tags="IMPORTANT*" code IN([subsearch])

My question is: how can I format the subsearch so that on the main search it returns results like

A,B,C,D,E,F

instead of

src_bunit="A,B,C,D,E,F"

Any ideas? Thank you!
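One way to get raw text instead of field="value" pairs: when a subsearch returns a single field named search, Splunk substitutes its value verbatim into the outer search. A sketch along those lines (field and eventtype names taken from the question; whether unquoted values parse inside IN() depends on the values themselves):

```
eventtype=pan (https://link1.net OR https://link2.net OR https://link3.net)
| rex field=url "LEN_(?<serial>\w+)"
| fillnull value=NULL src_bunit
| dedup src_bunit
| stats values(src_bunit) AS src_bunit
| eval search=mvjoin(src_bunit, ",")
| fields search
```

With this subsearch, code IN([subsearch]) in the main search should expand to code IN(A,B,C,D,E,F).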
Can anyone help me write a Splunk query so that when I have an outage, the query shows the duration of the outage?
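A minimal sketch, assuming hypothetical index, sourcetype, and status field names (the question gives none): pair each recovery event with the preceding outage event using streamstats and take the time difference.

```
index=app_logs sourcetype=health status IN ("DOWN", "UP")
| sort 0 _time
| streamstats current=f last(status) AS prev_status last(_time) AS prev_time
| eval outage_seconds=if(status="UP" AND prev_status="DOWN", _time - prev_time, null())
| where isnotnull(outage_seconds)
| eval outage_duration=tostring(outage_seconds, "duration")
| table _time outage_duration
```

The transaction command with startswith/endswith is an alternative when outages are marked by distinct start and end events.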
I have a working query which monitors the success rate based on a value called app_id. I want to extend the current query to also show the success rate for each app_id, broken down by currentWeek, lastWeek, and 2weeksago success rate percentage.

My current query is:

index=jj3 "TRANSACTIONA" OR "TRANSACTIONB"
| rex field=log "\"app_id\": \W(?<app_id>\w+)\W"
| rex field=log "\"event_name\": \W(?<event_name>[a-zA-Z-|_|:]+)\W"
| eval firstTransaction=if(event_name=="TRANSACTIONA", 1, 0)
| eval secondTransaction=if(event_name=="TRANSACTIONB", 1, 0)
| stats sum(firstTransaction) as TotalfirstTransaction sum(secondTransaction) as TotalsecondTransaction by app_id
| dedup app_id
| eval successRate=round(TotalsecondTransaction/TotalfirstTransaction*100, 1)."%"
| fillnull successRate
| sort - successRate
| search NOT successRate=0
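One possible extension, assuming three fixed 7-day buckets counted back from now: label each event with its week, aggregate by app_id and week, then pivot the weeks into columns with xyseries.

```
index=jj3 "TRANSACTIONA" OR "TRANSACTIONB" earliest=-21d@d
| rex field=log "\"app_id\": \W(?<app_id>\w+)\W"
| rex field=log "\"event_name\": \W(?<event_name>[a-zA-Z-|_|:]+)\W"
| eval week=case(_time >= relative_time(now(), "-7d@d"), "currentWeek",
                 _time >= relative_time(now(), "-14d@d"), "lastWeek",
                 true(), "2weeksago")
| eval firstTransaction=if(event_name=="TRANSACTIONA", 1, 0)
| eval secondTransaction=if(event_name=="TRANSACTIONB", 1, 0)
| stats sum(firstTransaction) AS total1 sum(secondTransaction) AS total2 by app_id week
| eval successRate=round(total2/total1*100, 1)
| xyseries app_id week successRate
```

The "%" suffix from the original query is dropped here so successRate stays numeric inside the pivot; it can be appended afterwards with a foreach or eval if needed.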
Hello guys, I'm working with a customer who wants to replace ArcSight with Splunk. We're moving some systems off ArcSight, and when we added "Fireglass" (by Symantec) to monitoring we saw extreme growth in license usage which almost caused violations. While digging into the logs we saw that some sites, like YouTube, produce something like 1000 events in a time frame of 1 minute. Looking deeper, I could see that all the video/audio traffic was sent as well. The customer told me that ArcSight has an option for "aggregation and filtration", which can take a number of identical logs and merge them into one event while still ingesting the whole traffic. Here's an explanation of the operation from the ArcSight side: ArcSight. Optimizing EPS (Aggregation and Filtration) - SOC Prime. My question: can I do something like this with Splunk? With a license of 100G, Fireglass takes about 60G. Thanks in advance, Etai
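Splunk's classic index-time mechanism drops events rather than merging them: a props/transforms pair can route unwanted events to the nullQueue before they count against the license. A hedged sketch, with a hypothetical sourcetype name and an illustrative regex you would replace with a pattern matching the actual video/audio events:

```
# props.conf - "fireglass" sourcetype name is an assumption
[fireglass]
TRANSFORMS-drop_media = drop_fireglass_media

# transforms.conf - regex is illustrative only
[drop_fireglass_media]
REGEX = (video|audio)
DEST_KEY = queue
FORMAT = nullQueue
```

This filters rather than aggregates; true ArcSight-style merge-and-count aggregation has no direct props/transforms equivalent, so the usual trade-off is deciding which event classes can be dropped outright.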
Hello, I have events coming via HEC to Splunk Cloud with an event size of 2641524 bytes, and I see the sourcetype truncate limit is set to 10000 by default. Is it recommended to raise the truncate limit to 2700000? Appreciate all your help. Thanks
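TRUNCATE is a per-sourcetype props.conf setting, so raising it only for the affected sourcetype (rather than globally) limits the memory impact of very large events. A sketch, with a hypothetical sourcetype name; in Splunk Cloud this change typically goes through a support ticket or an app upload rather than direct file edits:

```
# props.conf - sourcetype name is an assumption
[my_hec_sourcetype]
TRUNCATE = 2700000
```

It is also worth checking whether a ~2.6 MB "event" is really one event or several events missing a LINE_BREAKER, since fixing the breaking is usually preferable to raising TRUNCATE.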
I have a query where I can see a snapshot of current active users per VPN profile (group). I'm having a hard time plotting the number of active sessions on a timechart. The timecharts I make show me the number of new connections, which is not what I am after. I want to see the total number of active connections per VPN profile (group) every 10 minutes, for example.

SPL:

index=vpn_index message_id IN (113039 113019) group IN (ABC* XYZ* DEF* UVW*)
| transaction Username keepevicted=true startswith="113039" endswith="113019"
| eval session-status=if(closed_txn==1,"Completed","In Progress")
| search message_id="113039"
| fields src, _time, session-status, Username
| search session-status="In Progress"
| rename group as "VPN Profile"
| stats count as "Active Sessions" by "VPN Profile"

Any help is appreciated! - Adam
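One sketch for turning sessions into concurrency over time: expand each transaction into one synthetic event per 10-minute bucket it spans (mvrange + mvexpand), then count distinct users per bucket and group. This assumes transaction's _time and duration fields mark the session start and length.

```
index=vpn_index message_id IN (113039 113019) group IN (ABC* XYZ* DEF* UVW*)
| transaction Username keepevicted=true startswith="113039" endswith="113019"
| eval start=relative_time(_time, "@10m")
| eval stop=_time + duration
| eval times=mvrange(start, stop, 600)
| mvexpand times
| eval _time=times
| timechart span=10m dc(Username) AS "Active Sessions" by group
```

Still-open (evicted) sessions have duration up to their last seen event, so very long idle sessions may undercount; widening the search time range or the transaction maxspan can help.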