All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

How can I extract the below fields from _raw?

The Total Process Time to publish in Kafka topic is (milli-sec)=5
The Total Process Time(milli-sec)=1

I want to extract 1 and 5. How can I fetch them? Can someone guide me?
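A hedged sketch of one way to pull both numbers with rex; the output field names `publish_ms` and `process_ms` are illustrative, not from the original post:

```
... | rex "publish in Kafka topic is \(milli-sec\)=(?<publish_ms>\d+)"
    | rex "Total Process Time\(milli-sec\)=(?<process_ms>\d+)"
```

The second pattern requires the literal `Time(milli-sec)`, so it does not also match the "publish in Kafka topic" line.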
Let's say the data looks like:

StudentName StudentId Grade ExamDate
Tom 1 60 2021-04-01
Jerry 2 70 2021-04-01
Tom 1 62 2021-04-07
Jerry 2 55 2021-04-07

And the result I want looks like:

Formatted
Tom,1:2021-04-01,60;2021-04-07,62
Jerry,2;2021-04-01,70;2021-04-07,55

I want to divide the original data into groups by the key "StudentId", and then merge the contents of each group to make a formatted string. Of course, I can get all the data and write a program in Python or Java to process it... But it would be better if I could do this only with SPL. I have written a search to group by "StudentId":

transaction StudentId | stats list(_raw) as rawList by StudentId

But I don't know what to do next.
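A hedged sketch of one way to build the formatted string purely in SPL, assuming StudentName, StudentId, Grade, and ExamDate are already extracted fields (the separators mirror the example output):

```
... | eval pair=ExamDate.",".Grade
    | sort ExamDate
    | stats list(pair) as pairs by StudentName StudentId
    | eval Formatted=StudentName.",".StudentId.":".mvjoin(pairs, ";")
    | table Formatted
```

list() preserves arrival order, so sorting by ExamDate first keeps each student's exam history chronological; no transaction command is needed.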
Hello, I'm struggling to build efficient alert triggers with SPL. I made a Splunk dashboard to visualize our server, storage, and network usage data. The data is collected on a daily basis with a Python script, and Splunk is monitoring it. I want to get an alert if any one of these server/storage/network devices' usage is expected to go over 100% in the future (I tried to use the predict command). But since predict does not support multiple predictions at one time, and I can't create a separate alert for every one of those devices (over 100 servers and storages...), I need another solution to this problem. What would be the best way to make an alert trigger for each device? Thank you!
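One hedged alternative to predict is a simple linear extrapolation per device in a single search, which scales to any number of devices; the field names `usage` and `device` and the index name are assumptions about the underlying data:

```
index=usage_data
| stats earliest(_time) as t0 latest(_time) as t1 earliest(usage) as u0 latest(usage) as u1 by device
| eval slope_per_day=(u1-u0)/((t1-t0)/86400)
| eval days_until_full=if(slope_per_day>0, (100-u1)/slope_per_day, null())
| where days_until_full<=30
```

An alert on this search would fire once with one row per device projected to exceed 100% within 30 days, rather than needing a saved alert per device.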
We are pulling some data from REST using REST API Modular Input (splunkbase.splunk.com/app/1546/), Response type json, and receiving the below response   { currentServerTime: 2021-05-07T07:01:35.652+0000 measurements: [ { count: 0 open: true resultId: CSA_S_FT_L_ANY time: 00:00:00 timeInSeconds: 0 updated: 2021-05-07T07:01:00.000+0000 } { count: 0 open: true resultId: CSA_N_REG_L_ANY time: 00:00:00 timeInSeconds: 0 updated: 2021-05-07T07:01:00.000+0000 } { count: 0 open: true resultId: CSA_S_REG_L_7 time: 00:00:00 timeInSeconds: 0 updated: 2021-05-07T07:01:00.000+0000 } { count: 0 open: true resultId: CSA_S_REG_L_6 time: 00:00:00 timeInSeconds: 0 updated: 2021-05-07T07:01:00.000+0000 } { count: 0 open: true resultId: CSA_S_REG_L_5 time: 00:00:00 timeInSeconds: 0 updated: 2021-05-07T07:01:00.000+0000 } { count: 0 open: true resultId: CSA_S_REG_L_4 time: 00:00:00 timeInSeconds: 0 updated: 2021-05-07T07:01:00.000+0000 } { count: 0 open: true resultId: CSA_S_REG_L_3 time: 00:00:00 timeInSeconds: 0 updated: 2021-05-07T07:01:00.000+0000 } { count: 0 open: true resultId: CSA_S_REG_L_10 time: 00:00:00 timeInSeconds: 0 updated: 2021-05-07T07:01:00.000+0000 } { count: 0 open: true resultId: CSA_S_REG_L_2 time: 00:00:00 timeInSeconds: 0 updated: 2021-05-07T07:01:00.000+0000 } { count: 0 open: true resultId: CSA_S_REG_L_1 time: 00:00:00 timeInSeconds: 0 updated: 2021-05-07T07:01:00.000+0000 } { calculatedTimeInSeconds: 0 count: 0 open: true resultId: CSA_N_REG_L_2 time: 00:10:00 timeInSeconds: 600 updated: 2021-05-07T07:01:00.000+0000 } { count: 0 open: true resultId: CSA_N_REG_L_1 time: 00:00:00 timeInSeconds: 0 updated: 2021-05-07T07:01:00.000+0000 } { count: 0 open: true resultId: CSA_N_REG_L_10 time: 00:00:00 timeInSeconds: 0 updated: 2021-05-07T07:01:00.000+0000 } { count: 0 open: true resultId: CSA_N_REG_L_4 time: 00:00:00 timeInSeconds: 0 updated: 2021-05-07T07:01:00.000+0000 } { count: 0 open: true resultId: CSA_S_REG_L_9 time: 00:00:00 timeInSeconds: 0 updated: 
2021-05-07T07:01:00.000+0000 } { count: 0 open: true resultId: CSA_N_REG_L_3 time: 00:00:00 timeInSeconds: 0 updated: 2021-05-07T07:01:00.000+0000 } { count: 0 open: true resultId: CSA_S_REG_L_8 time: 00:00:00 timeInSeconds: 0 updated: 2021-05-07T07:01:00.000+0000 } { count: 0 open: true resultId: CSA_N_REG_L_6 time: 00:00:00 timeInSeconds: 0 updated: 2021-05-07T07:01:00.000+0000 } { calculatedTimeInSeconds: 0 count: 0 open: true resultId: CSA_N_FT_L_8 time: 00:05:00 timeInSeconds: 300 updated: 2021-05-07T07:01:00.000+0000 } { count: 0 open: true resultId: CSA_N_REG_L_5 time: 00:00:00 timeInSeconds: 0 updated: 2021-05-07T07:01:00.000+0000 } { count: 0 open: true resultId: CSA_N_REG_L_8 time: 00:00:00 timeInSeconds: 0 updated: 2021-05-07T07:01:00.000+0000 } { count: 0 open: true resultId: CSA_N_REG_L_7 time: 00:00:00 timeInSeconds: 0 updated: 2021-05-07T07:01:00.000+0000 } { count: 0 open: true resultId: CSA_N_FT_L_10 time: 00:00:00 timeInSeconds: 0 updated: 2021-05-07T07:01:00.000+0000 } { count: 0 open: true resultId: CSA_N_REG_L_9 time: 00:00:00 timeInSeconds: 0 updated: 2021-05-07T07:01:00.000+0000 } { count: 0 open: true resultId: CSA_N_FT_L_9 time: 00:00:00 timeInSeconds: 0 updated: 2021-05-07T07:01:00.000+0000 } { count: 0 open: true resultId: CSA_S_REG_L_ANY time: 00:00:00 timeInSeconds: 0 updated: 2021-05-07T07:01:00.000+0000 } { count: 0 open: true resultId: CSA_S_FT_L_3 time: 00:00:00 timeInSeconds: 0 updated: 2021-05-07T07:01:00.000+0000 } { count: 0 open: true resultId: CSA_S_FT_L_2 time: 00:00:00 timeInSeconds: 0 updated: 2021-05-07T07:01:00.000+0000 } { count: 0 open: true resultId: CSA_S_FT_L_1 time: 00:00:00 timeInSeconds: 0 updated: 2021-05-07T07:01:00.000+0000 } { count: 0 open: true resultId: CSA_N_FT_L_ANY time: 00:00:00 timeInSeconds: 0 updated: 2021-05-07T07:01:00.000+0000 } ] }   We would like to split each individual result into individual events using "updated" as the timestamp, however, no matter what I have tried, I can't get Splunk to 
break the events. I've tried writing a custom response handler, but it's not working; this isn't my area of expertise so I'm really struggling! This is what I have written:

class BlipTrackHandler:
    def __init__(self, **args):
        pass

    def __call__(self, response_object, raw_response_output, response_type, req_args, endpoint):
        if response_type == "json":
            output = json.loads(raw_response_output)
            for measurement in output["measurements"]:
                # take "updated" from the current item, not from the list itself
                measurement["timestamp"] = measurement["updated"]
                print_xml_stream(json.dumps(measurement))
        else:
            print_xml_stream(raw_response_output)

Is anyone able to help?
Hi all, hoping you are all well and safe!

I am building a dashboard which requires me to place a logo (in .png format) in the space I have circled in yellow in a Splunk dashboard.

Is this possible through plain XML? Please help me with how I can achieve this.

Thanks in advance!
I need to combine logs from multiple events based on a unique field and trigger an alert if any step is missing from the event sequence. Example: for any transaction, events should be generated in a fixed order, keyed by reference number. The first event should be initiation, followed by debit, followed by verification, followed by creditor verification, and then money debited and credited into the accounts. Based on this, I need to combine all the results in that order and trigger an alert if any event in the sequence is missing. Can anyone please help me with this?
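A hedged sketch of one approach: gather the step names seen per reference number with stats, then flag any reference where an expected step is absent. The field names `ref_num` and `step` and the index are assumptions; the remaining steps follow the same pattern as the two shown:

```
index=transactions
| stats values(step) as steps by ref_num
| eval missing_initiation=if(isnull(mvfind(steps, "^initiation$")), "initiation", null())
| eval missing_debit=if(isnull(mvfind(steps, "^debit$")), "debit", null())
| where isnotnull(missing_initiation) OR isnotnull(missing_debit)
```

mvfind returns null when no value of the multivalue field matches, so an alert on this search fires with one row per reference number that skipped a step.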
<search id="base_query_filter">
<query>index=a sourcetype=x | eval y=A+B</query>
</search>
<search id="base_query">
<query>index=a sourcetype=x | eval y=A+B (here, can I use the base_query_filter base search?) | join type=inner max=0 [search index=b sourcetype=y]</query>
</search>
Is it possible to reference one base search from within another base search id? Thank you in advance, Renuka
Hi, I'm trying to line-break events and extract the timestamp, but it has no date. Any ideas how to handle this?

[04:05:16.255][t] setting data time stamp
[04:05:14.255][t] setting data time stampwewe22
[04:05:12.255][t] setting data time etc
<PSET>CDPTLSGSG <cc>
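A minimal props.conf sketch under the assumption that each event starts with the bracketed time (the sourcetype name is a placeholder); when the event carries no date, Splunk falls back to a date from the source context (such as the current date or the file's modification time), so only the time portion needs to be parsed:

```
[my_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=\[\d{2}:\d{2}:\d{2}\.\d{3}\])
TIME_PREFIX = ^\[
TIME_FORMAT = %H:%M:%S.%3N
MAX_TIMESTAMP_LOOKAHEAD = 13
```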
I have 2 servers that receive the logs through syslog, and through a universal forwarder I forward them to 2 indexers. I have 2 search heads that are not in cluster mode; they are standalone. Only Splunk Enterprise is installed on one search head, and Splunk Enterprise plus Splunk Enterprise Security on the other. For obvious reasons, they both have peers pointing to both indexers. What makes me curious is why, if I run any SPL query on the search head where Splunk ES is installed, it shows that it searched more logs than on the search head where I only have Splunk.
Hi guys, I got my query right and I see my values properly populate in the dropdown input. However, I can't pass this token on to the next query. Is there something wrong with my XML?

<input type="dropdown" token="sourceName" searchWhenChanged="true">
  <label>Select SourceName</label>
  <choice value="*">All</choice>
  <default>*</default>
  <fieldForLabel>sourcename</fieldForLabel>
  <fieldForValue>sourcename</fieldForValue>
  <search>
    <query>index="example_of_idx" sourcename=Service_Requested OR sourcename=Service_Suggested | stats count by sourcename | replace Service_Requested WITH "Type of Service" Service_Suggested WITH "Type of Service Suggested" IN sourcename | table sourcename</query>
  </search>
</input>
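For reference, a consuming panel dereferences a dropdown token with $...$ delimiters; a minimal sketch, reusing the index name from the dropdown's own search:

```
<panel>
  <table>
    <search>
      <query>index="example_of_idx" sourcename="$sourceName$" | stats count by sourcename</query>
    </search>
  </table>
</panel>
```

One caveat worth checking in a setup like the one above: a replace command in the dropdown's search rewrites the sourcename values before fieldForValue is read, so the token may carry the replaced label (e.g. "Type of Service") rather than the raw value, and a downstream search filtering on the raw field would then return nothing.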
Hi team, I tried the below SPL eval command:

index=aws Website="*" | stats count(eval(match(User_Agent, "Firefox"))) as "Firefox", count(eval(match(User_Agent, "Chrome"))) as "Chrome", count(eval(match(User_Agent, "Safari"))) as "Safari", count(eval(match(User_Agent, "MSIE"))) as "IE", count(eval(match(User_Agent, "Trident"))) as "Trident", count(eval(NOT match(User_Agent, "Chrome|Firefox|Safari|MSIE|Trident"))) as "Other" | transpose | sort by User_Agent

When I use this in my Splunk search, it puts all the data into "Other": Firefox=0, Chrome=0, IE=0.

Thanks
Hi team, I am trying to extract the OS details from the user agent using the below eval command; however, I am not able to see the new field (test) created after I executed the SPL query:

index=aws Website="*" | eval test = case(match(useragent,"Windows .. 5\.1"),"Windows XP",match(useragent,"droid"),"Android",match(useragent,"Windows NT 6.1"),"Windows 7")

Any help please?

Thanks
So I have this very strange problem. We have 2 search head environments: one search head cluster (7 members) and a standalone dev search head. They are both connected to the same indexer cluster.

A new problem appeared today. When users search for a key-value pair, Splunk stalls and does not bring up any results. Splunk does not terminate the search either; it just stalls until I close the tab. See below:

However, the moment a sourcetype is mentioned, Splunk brings back results:

The issue only appears on the firewall index, which has several sourcetypes in it. This issue does not appear in other indexes that contain multiple sourcetypes, just index=firewalls.

I was curious whether the problem resided in the search head layer or the indexers, so I ran the same vague search (index=firewalls src="*") on our standalone SH and on the indexers, and both returned normal results. So the issue seems to reside on the specific search head cluster.

Any idea?
Hello, can anyone please help me with the line-breaking and truncation issue I am seeing for nested JSON events coming to Splunk via HEC? The event size is almost 25 million bytes, whereas the truncate limit is set to only 10000, so the event is getting truncated. I was not allowed to set the truncate limit to 0 due to performance concerns. I want to break this nested event into multiple events, splitting at Source_System.

Example of an event:

{"sourcetype": "abc_json","index":"test", "event":{"severity":"INFO","logger":"org.mule.runtime.core.internal.processor.LoggerMessageProcessor","time":"XXX","thread":"[MuleRuntime].xxx.123: [App name].post:\\schedules:application\\json:app.CPU_INTENSIVE @xxxx","message":{"correlationId":"XXXX","inputPayload":[{"Source_System":"TEST","Created_By":"ESB","Created_Date_UTC":"1900-XX-01T02:59:14.783Z","Last_Updated_By":"ESB","Last_Updated_Date_UTC":"2020-07-25T03:34:31.91Z",]},{"Source_System":"TEST2","Created_By":"ESB","Created_Date_UTC":"1900-XX-07T02:59:14.783Z","Last_Updated_By":"ESB","Last_Updated_Date_UTC":"1900-XX-25T03:34:31.91Z",]},{"Source_System":"TEST3","Created_By":"ESB","Created_Date_UTC":"2019-08-22T23:27:32.123Z","Last_Updated_By":"ESB","Last_Updated_Date_UTC":"1900-xx-20T01:11:45.35Z",]}}}}'

My current props.conf configuration:

ADD_EXTRA_TIME_FIELDS=True
ANNOTATE_PUNCT=true
AUTO_KV_JSON=true
BREAK_ONLY_BEFORE_DATE=null
CHARSET=UTF-8
DEPTH_LIMIT=1000
DETERMINE_TIMESTAMP_DATE_WITH_SYSTEM_TIME=false
LB_CHUNK_BREAKER_TRUNCATE=2000000
LEARN_MODEL=true
LEARN_SOURCETYPE=true
LINE_BREAKER=([,|[]){"Source_System":
LINE_BREAKER_LOOKBEHIND=100
MATCH_LIMIT=100000
MAX_DAYS_AGO=2000
MAX_DAYS_HENCE=2
MAX_DIFF_SECS_AGO=3600
MAX_DIFF_SECS_HENCE=604800
MAX_EVENTS=256
MAX_TIMESTAMP_LOOKAHEAD=128
NO_BINARY_CHECK=true
SEGMENTATION=indexing
SEGMENTATION-all=full
SEGMENTATION-inner=inner
SEGMENTATION-outer=outer
SEGMENTATION-raw=none
SEGMENTATION-standard=standard
SHOULD_LINEMERGE=false
TRUNCATE=10000
category=Custom
detect_trailing_nulls=false
disabled=false
maxDist=100
pulldown_type=true
termFrequencyWeightedDist=false

Am I missing something? Any help would be highly appreciated.

Thanks
How do I connect Splunk APM to logs and other resources?
How do I configure Business Workflows in Splunk APM?
How do I instrument Azure app services with extensions in Splunk APM or Splunk Observability Cloud?
In Splunk APM or Splunk Observability Cloud, how do I export spans from an inferred service?
How do I export spans from a service mesh for Splunk APM and Splunk Observability Cloud?
How do I export spans from an AWS Lambda function in Splunk APM and Splunk Observability Cloud?