
Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


When trying to create a search head cluster on Ubuntu 20.04 with Splunk Enterprise 8.2.2.2, I receive an init error. It seems Splunk is not able to use init on my system. If I run the following command on my first search head server:

splunk_admin@server1:/opt/splunk/bin$ sudo ./splunk init
Command error: 'init' is not a valid command. Please run 'splunk help' to see the valid commands.

The full command to initialize the search head cluster does not work either; I only posted the command with init since Splunk cannot see it. Or does it not have the proper rights to it? Other commands seem to work:

splunk_admin@server:/opt/splunk/bin$ sudo ./splunk list
Command error: Additional arguments are needed for the 'list' command. Please type "splunk help list" for usage and examples.

Running the command as root:

root@server1:/opt/splunk/bin# ./splunk init
Command error: 'init' is not a valid command. Please run 'splunk help' to see the valid commands.

Splunk Enterprise is running under the splunk account:

splunk_admin@server1:/opt/splunk/bin$ sudo ps -elf | grep splunkd
1 S splunk 39839 1 2 80 0 - 158573 ep_pol 08:20 ? 00:10:33 splunkd -p 8089 start
1 S splunk 39840 39839 0 80 0 - 25243 ep_pol 08:20 ? 00:00:14 [splunkd pid=39839] splunkd -p 8089 start [process-runner]
0 S splunk 40102 39840 0 80 0 - 47271 poll_s 08:20 ? 00:00:42 /opt/splunk/bin/splunkd instrument-resource-usage -p 8089 --with-kvstore
0 S splunk_+ 115584 108329 0 80 0 - 1608 pipe_w 15:22 pts/0 00:00:00 grep --color=auto splunkd

Any advice on solving this issue will be greatly appreciated.
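For anyone landing on this thread: in this Splunk version, init on its own is not a top-level CLI verb, which would explain the error above. The documented search head cluster initialization command takes the init shcluster-config form; a sketch with placeholder values (verify the arguments against the docs for your version):

```shell
./splunk init shcluster-config -auth admin:changeme \
    -mgmt_uri https://server1.example.com:8089 \
    -replication_port 9200 \
    -replication_factor 3 \
    -conf_deploy_fetch_url https://deployer.example.com:8089 \
    -secret mysharedsecret \
    -shcluster_label shcluster1
```

The hostnames, credentials, and secret above are placeholders, not values from this post.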
Hi, how can I tune this SPL query? It executes daily and returns something like this:

servername  send                 receive              customer                 ID  status
Customer4   2021-21-11 12:12:39  2021-21-11 12:15:03  CUS.AaBB-APP1-12345_CUS  10  144.772000
Customer3   2021-21-11 12:09:58  2021-21-11 12:12:03  CUS.AaBB-APP1-98765_CUS  20  125.616000

Here are the statistics for this query:

Events: 72,070,802 (11/21/21 12:00:00.000 AM to 11/22/21 12:00:00.000 AM)
Size: 2.09 GB
Statistics: 248,138

It takes a huge amount of time to return the result. Is there any way to tune the query, or any trick that returns this result faster? FYI: I tried using a summary index, but it still takes a long time to return the result.

Here is my query:

index="myindex" source="/data/product/*/customer*" (date_hour>=1 AND (date_hour<23 OR (date_hour=23 date_minute<30))) "Packet Processed" OR "Normal Packet Received"
| rex field=source "\/data\/(?<product>\w+)\/(?<date>\d+)\/(?<servername>\w+)"
| rex ID\[(?<ID>\d+)
| rex "^(?<timestamp>.{23}) INFO (?<customer>.*) \[AppServiceName\] (?<status>.*): M\[(?<Acode>.*)\] T\[(?<Bcode>\d+)\]"
| rex field=customer "_(?<customer2>.*)"
| eval customer2=coalesce(customer2,customer), customer=if(customer=customer2,null(),customer)
| eval sendTime=if(status="Packet Processed",strptime(timestamp,"%Y-%m-%d %H:%M:%S,%3Q"),null()), receiveTime=if(status="Normal Packet Received",strptime(timestamp,"%Y-%m-%d %H:%M:%S,%3Q"),null())
| eval AcodeSend=if(status="Packet Processed",Acode,null()), BcodeSend=if(status="Packet Processed",Bcode,null()), AcodeReceive=if(status="Normal Packet Received",Acode,null()), BcodeReceive=if(status="Normal Packet Received",Bcode,null())
| eval AcodeReceiveLookFor=AcodeSend+10, acr=coalesce(AcodeReceive,AcodeReceiveLookFor)
| fields - Acode _time timestamp status AcodeReceiveLookFor
| stats values(*) as *, count by customer2, acr, Bcode
| eval duration=receiveTime-sendTime, customer=coalesce(customer,customer2)
| eval status=case(isnull(AcodeSend),"No Send",isnull(AcodeReceive),"No receive")
| eventstats max(duration) as duration by customer2
| where count=2 OR (status="No receive" AND isnull(duration))
| eval status=coalesce(status,duration)
| search NOT status="No receive"
| search NOT status="No Send"
| search status>2
| eval send=strftime(sendTime, "%Y-%d-%m %H:%M:%S")
| eval receive=strftime(receiveTime, "%Y-%d-%m %H:%M:%S")
| table servername send receive customer ID status

Any idea? Thanks
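One tuning direction sometimes worth trying here (a sketch, not a tested rewrite of the query above): stats values(*) as * carries every extracted field through the aggregation, which is expensive at this event volume. Listing only the fields the final table actually needs can reduce the work considerably:

```spl
| stats values(servername) as servername values(customer) as customer values(ID) as ID
        values(sendTime) as sendTime values(receiveTime) as receiveTime
        values(AcodeSend) as AcodeSend values(AcodeReceive) as AcodeReceive
        count by customer2 acr Bcode
```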
Hi, I have logs in the below format, which is a mix of delimiter (|) and JSON. I want to extract statusCode and statusCodeValue and create a table with columns _time, statusCodeValue, statusCode. Can someone please help me?

2021-11-22 05:52:09.755 INFO - c.t.c.a.t.service.UserInfoService(101) - abcd | abcd-APP | /user-info | af4772c0-1fcd-4a82-858e-c2f7f0821724 | APP | -| Response of validateAddress abcd Service: { "headers" : { }, "body" : { "baseError" : { "code" : "3033", "reason" : "User is unauthorized", "explanation" : "Unauthorized" } }, "statusCode" : "UNAUTHORIZED", "statusCodeValue" : 401 }
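A possible approach (a sketch, assuming the JSON tail always contains these two keys, and with index/sourcetype as placeholders for wherever these events live) is to pull the two values out of _raw with rex rather than parsing the whole mixed event:

```spl
index=your_index sourcetype=your_sourcetype
| rex field=_raw "\"statusCode\"\s*:\s*\"(?<statusCode>[^\"]+)\""
| rex field=_raw "\"statusCodeValue\"\s*:\s*(?<statusCodeValue>\d+)"
| table _time statusCodeValue statusCode
```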
Hi! I make a dashboard in Splunk Dashboard Studio, but I don't know how I can program the Auto refresh ( every 30 sec) to update the entire dashboard.   Please Help!   { "dataSources": { "ds_search_1_new_new_new_new_new_new_new_new_new_new_new_new_new_new_new_new_new_new": { "type": "ds.search", "options": { "query": "index=jd1 source=2021|stats latest(COUNT) as 2021", "queryParameters": { "earliest": "-15m", "latest": "now" } } }, "ds_search_1_new_new_new_new_new_new_new_new_new_new_new_new_new_new_new_new_new": { "type": "ds.search", "options": { "query": "index=jd1 source=2021App|stats latest(COUNT) as 2021App", "queryParameters": { "earliest": "-15m", "latest": "now" } } }, "ds_search_1_new_new_new_new_new_new_new_new_new_new_new_new_new_new_new_new": { "type": "ds.search", "options": { "query": "index=jd1 source=2021OF|stats latest(COUNT) as 2021OF", "queryParameters": { "earliest": "-15m", "latest": "now" } } }, "ds_search_1_new_new_new_new_new_new_new_new_new_new_new_new_new_new_new": { "type": "ds.search", "options": { "query": "index=jd1 source=2021OC|stats latest(COUNT) as 2021OC", "queryParameters": { "earliest": "-15m", "latest": "now" } } }, "ds_search_1_new_new_new_new_new_new_new_new_new_new_new_new_new_new": { "type": "ds.search", "options": { "query": "index=jd1 source=2021OD|stats latest(COUNT) as 2021OD", "queryParameters": { "earliest": "-15m", "latest": "now" } } }, "ds_search_1_new_new_new_new_new_new_new_new_new_new_new_new_new": { "type": "ds.search", "options": { "query": "index=jd1 source=2021OF|stats latest(COUNT) as 2021OF", "queryParameters": { "earliest": "-15m", "latest": "now" } } }, "ds_search_1_new_new_new_new_new_new_new_new_new_new_new_new": { "type": "ds.search", "options": { "query": "index=jd1 source=2021OH|stats latest(COUNT) as 2021OH", "queryParameters": { "earliest": "-15m", "latest": "now" } } }, "ds_search_1_new_new_new_new_new_new_new_new_new_new_new": { "type": "ds.search", "options": { "query": "index=jd1 
source=2021Oss|stats latest(COUNT) as 2021Oss", "queryParameters": { "earliest": "-15m", "latest": "now" } } }, "ds_search_1_new_new_new_new_new_new_new_new_new_new": { "type": "ds.search", "options": { "query": "index=jd1 source=Apply2021|stats latest(COUNT) as Apply2021", "queryParameters": { "earliest": "-15m", "latest": "now" } } }, "ds_search_1_new_new_new_new_new_new_new_new_new": { "type": "ds.search", "options": { "query": "index=jd1 source=2021AppError|stats latest(COUNT) as 2021AppError", "queryParameters": { "earliest": "-15m", "latest": "now" } } }, "ds_search_1_new_new_new_new_new_new_new_new": { "type": "ds.search", "options": { "query": "index=jd1 source=2021Completed|stats latest(COUNT) as 2021Completed", "queryParameters": { "earliest": "-15m", "latest": "now" } } }, "ds_search_1_new_new_new_new_new_new_new": { "type": "ds.search", "options": { "query": "index=jd1 source=2021AppRefNotCompleted|stats latest(COUNT) as 2021AppRefNotCompleted", "queryParameters": { "earliest": "-15m", "latest": "now" } } }, "ds_search_1_new_new_new_new_new_new": { "type": "ds.search", "options": { "query": "index=jd1 source=2021AppReturned|stats latest(COUNT) as 2021AppReturned", "queryParameters": { "earliest": "-15m", "latest": "now" } } }, "ds_search_1_new_new_new_new_new": { "type": "ds.search", "options": { "query": "index=jd1 source=TotalApp2021|stats latest(COUNT) as TotalApp2021", "queryParameters": { "earliest": "-15m", "latest": "now" } } }, "ds_search_1_new_new_new_new": { "type": "ds.search", "options": { "query": "index=jd1 source=TotalSApp2021|stats latest(COUNT) as TotalSApp2021", "queryParameters": { "earliest": "-15m", "latest": "now" } } }, "ds_search_1_new_new_new": { "type": "ds.search", "options": { "query": "index=jd1 source=2021NT|stats latest(COUNT) as 2021NT", "queryParameters": { "earliest": "-15m", "latest": "now" } } }, "ds_search_1_new_new": { "type": "ds.search", "options": { "query": "index=jd1 source=TotalAI2021|stats latest(COUNT) as 
TotalAI2021", "queryParameters": { "earliest": "-15m", "latest": "now" } } }, "ds_search_1_new": { "type": "ds.search", "options": { "query": "index=jd1 source=TotalAS2021|stats latest(COUNT) as TotalAS2021", "queryParameters": { "earliest": "-15m", "latest": "now" } } }, "ds_search_1": { "type": "ds.search", "options": { "query": "index=jd1 source=2021NInApply|stats latest(COUNT) as 2021NInApply", "queryParameters": { "earliest": "-15m", "latest": "now" } } }, "ds_pEQkMQUp_ds_search_1": { "type": "ds.search", "options": { "query": "index=jd1 source=2021|table COUNT|dedup COUNT", "queryParameters": { "earliest": "-15m", "latest": "now" } } }, "ds_6wDR22mI_ds_search_1_new_new_new_new_new_new_new_new_new_new_new_new_new_new_new_new_new_new": { "type": "ds.search", "options": { "query": "index=jd1 source=2021C|stats latest(COUNT) as 2021C", "queryParameters": { "earliest": "-15m", "latest": "now" } } } }, "visualizations": { "viz_single_1_new_new_new_new_new_new_new_new_new_new_new_new_new_new_new": { "type": "splunk.singlevalue", "options": { "colorMode": "none", "drilldown": "none", "numberPrecision": 0, "sparklineDisplay": "below", "trendDisplay": "absolute", "trellis.enabled": 0, "trellis.scales.shared": 1, "trellis.size": "medium", "unitPosition": "after", "shouldUseThousandSeparators": true }, "dataSources": { "primary": "ds_search_1_new_new_new_new_new_new_new_new_new_new_new_new_new_new_new_new_new" }, "title": "StO" }, "viz_single_1_new_new_new_new_new_new_new_new_new_new_new_new_new_new": { "type": "splunk.singlevalue", "options": { "colorMode": "none", "drilldown": "none", "numberPrecision": 0, "sparklineDisplay": "below", "trendDisplay": "absolute", "trellis.enabled": 0, "trellis.scales.shared": 1, "trellis.size": "medium", "unitPosition": "after", "shouldUseThousandSeparators": true }, "dataSources": { "primary": "ds_search_1_new_new_new_new_new_new_new_new_new_new_new_new_new_new_new_new" }, "title": "FRO" }, 
"viz_single_1_new_new_new_new_new_new_new_new_new_new_new_new_new": { "type": "splunk.singlevalue", "options": { "colorMode": "none", "drilldown": "none", "numberPrecision": 0, "sparklineDisplay": "below", "trendDisplay": "absolute", "trellis.enabled": 0, "trellis.scales.shared": 1, "trellis.size": "medium", "unitPosition": "after", "shouldUseThousandSeparators": true }, "dataSources": { "primary": "ds_search_1_new_new_new_new_new_new_new_new_new_new_new_new_new_new_new" }, "title": "CANCEL" }, "viz_single_1_new_new_new_new_new_new_new_new_new_new_new_new": { "type": "splunk.singlevalue", "options": { "colorMode": "none", "drilldown": "none", "numberPrecision": 0, "sparklineDisplay": "below", "trendDisplay": "absolute", "trellis.enabled": 0, "trellis.scales.shared": 1, "trellis.size": "medium", "unitPosition": "after", "shouldUseThousandSeparators": true }, "dataSources": { "primary": "ds_search_1_new_new_new_new_new_new_new_new_new_new_new_new_new_new" }, "title": "Dp" }, "viz_single_1_new_new_new_new_new_new_new_new_new_new_new": { "type": "splunk.singlevalue", "options": { "colorMode": "none", "drilldown": "none", "numberPrecision": 0, "sparklineDisplay": "below", "trendDisplay": "absolute", "trellis.enabled": 0, "trellis.scales.shared": 1, "trellis.size": "medium", "unitPosition": "after", "shouldUseThousandSeparators": true }, "dataSources": { "primary": "ds_search_1_new_new_new_new_new_new_new_new_new_new_new_new_new" }, "title": "FAIL" }, "viz_single_1_new_new_new_new_new_new_new_new_new_new": { "type": "splunk.singlevalue", "options": { "colorMode": "none", "drilldown": "none", "numberPrecision": 0, "sparklineDisplay": "below", "trendDisplay": "absolute", "trellis.enabled": 0, "trellis.scales.shared": 1, "trellis.size": "medium", "unitPosition": "after", "shouldUseThousandSeparators": true }, "dataSources": { "primary": "ds_search_1_new_new_new_new_new_new_new_new_new_new_new_new" }, "title": "H/T" }, "viz_single_1_new_new_new_new_new_new_new_new_new": { 
"type": "splunk.singlevalue", "options": { "colorMode": "none", "drilldown": "none", "numberPrecision": 0, "sparklineDisplay": "below", "trendDisplay": "absolute", "trellis.enabled": 0, "trellis.scales.shared": 1, "trellis.size": "medium", "unitPosition": "after", "shouldUseThousandSeparators": true }, "dataSources": { "primary": "ds_search_1_new_new_new_new_new_new_new_new_new_new_new" }, "title": "S/S" }, "viz_single_1_new_new_new_new_new_new_new_new": { "type": "splunk.singlevalue", "options": { "colorMode": "none", "drilldown": "none", "numberPrecision": 0, "sparklineDisplay": "below", "trendDisplay": "absolute", "trellis.enabled": 0, "trellis.scales.shared": 1, "trellis.size": "medium", "unitPosition": "after", "shouldUseThousandSeparators": true }, "dataSources": { "primary": "ds_search_1_new_new_new_new_new_new_new_new_new_new" }, "title": "Current" }, "viz_single_1_new_new_new_new_new_new_new": { "type": "splunk.singlevalue", "options": { "colorMode": "none", "drilldown": "none", "numberPrecision": 0, "sparklineDisplay": "below", "trendDisplay": "absolute", "trellis.enabled": 0, "trellis.scales.shared": 1, "trellis.size": "medium", "unitPosition": "after", "shouldUseThousandSeparators": true }, "dataSources": { "primary": "ds_search_1_new_new_new_new_new_new_new_new_new" }, "title": "SAP" }, "viz_single_1_new_new_new_new_new_new": { "type": "splunk.singlevalue", "options": { "colorMode": "none", "drilldown": "none", "numberPrecision": 0, "sparklineDisplay": "below", "trendDisplay": "absolute", "trellis.enabled": 0, "trellis.scales.shared": 1, "trellis.size": "medium", "unitPosition": "after", "shouldUseThousandSeparators": true }, "dataSources": { "primary": "ds_search_1_new_new_new_new_new_new_new_new" }, "title": "Application C" }, "viz_single_1_new_new_new_new_new": { "type": "splunk.singlevalue", "options": { "colorMode": "none", "drilldown": "none", "numberPrecision": 0, "sparklineDisplay": "below", "trendDisplay": "absolute", "trellis.enabled": 0, 
"trellis.scales.shared": 1, "trellis.size": "medium", "unitPosition": "after", "shouldUseThousandSeparators": true }, "dataSources": { "primary": "ds_search_1_new_new_new_new_new_new_new" }, "title": "Application not c" }, "viz_single_1_new_new_new_new": { "type": "splunk.singlevalue", "options": { "colorMode": "none", "drilldown": "none", "numberPrecision": 0, "sparklineDisplay": "below", "trendDisplay": "absolute", "trellis.enabled": 0, "trellis.scales.shared": 1, "trellis.size": "medium", "unitPosition": "after", "shouldUseThousandSeparators": true }, "dataSources": { "primary": "ds_search_1_new_new_new_new_new_new" }, "title": "Applications returned" }, "viz_single_1_new_new_new": { "type": "splunk.singlevalue", "options": { "colorMode": "none", "drilldown": "none", "numberPrecision": 0, "sparklineDisplay": "below", "trendDisplay": "absolute", "trellis.enabled": 0, "trellis.scales.shared": 1, "trellis.size": "medium", "unitPosition": "after", "shouldUseThousandSeparators": true }, "dataSources": { "primary": "ds_search_1_new_new_new_new_new" }, "title": "Applied" }, "viz_single_1_new_new": { "type": "splunk.singlevalue", "options": { "colorMode": "none", "drilldown": "none", "numberPrecision": 0, "sparklineDisplay": "below", "trendDisplay": "absolute", "trellis.enabled": 0, "trellis.scales.shared": 1, "trellis.size": "medium", "unitPosition": "after", "shouldUseThousandSeparators": true }, "dataSources": { "primary": "ds_search_1_new_new_new_new" }, "title": "Applied s" }, "viz_table_1_new": { "type": "splunk.singlevalue", "dataSources": { "primary": "ds_search_1_new_new_new" }, "title": "Applications S2" }, "viz_single_1_new": { "type": "splunk.singlevalue", "options": { "colorMode": "none", "drilldown": "none", "numberPrecision": 0, "sparklineDisplay": "below", "trendDisplay": "absolute", "trellis.enabled": 0, "trellis.scales.shared": 1, "trellis.size": "medium", "unitPosition": "after", "shouldUseThousandSeparators": true }, "dataSources": { "primary": 
"ds_search_1_new_new" }, "title": "Applied I" }, "viz_single_1": { "type": "splunk.singlevalue", "options": { "colorMode": "none", "drilldown": "none", "numberPrecision": 0, "sparklineDisplay": "below", "trendDisplay": "absolute", "trellis.enabled": 0, "trellis.scales.shared": 1, "trellis.size": "medium", "unitPosition": "after", "shouldUseThousandSeparators": true }, "dataSources": { "primary": "ds_search_1_new" }, "title": "Applied s3" }, "viz_table_1": { "type": "splunk.singlevalue", "dataSources": { "primary": "ds_search_1" }, "options": { "sparklineDisplay": "after" }, "title": "Total A" }, "viz_Qa9CUq0z": { "type": "splunk.singlevalue", "options": { "colorMode": "none", "drilldown": "none", "numberPrecision": 0, "sparklineDisplay": "below", "trendDisplay": "absolute", "trellis.enabled": 0, "trellis.scales.shared": 1, "trellis.size": "medium", "unitPosition": "after", "shouldUseThousandSeparators": true }, "dataSources": { "primary": "ds_6wDR22mI_ds_search_1_new_new_new_new_new_new_new_new_new_new_new_new_new_new_new_new_new_new" }, "title": "CL" } }, "inputs": { "input_global_trp": { "type": "input.timerange", "options": { "token": "global_time", "defaultValue": "-24h@h,now" }, "title": "Global Time Range" } }, "layout": { "type": "absolute", "options": { "height": 800 }, "structure": [ { "item": "viz_table_1", "type": "block", "position": { "x": 510, "y": 20, "w": 150, "h": 150 } }, { "item": "viz_single_1", "type": "block", "position": { "x": 730, "y": 20, "w": 150, "h": 150 } }, { "item": "viz_single_1_new", "type": "block", "position": { "x": 950, "y": 20, "w": 150, "h": 150 } }, { "item": "viz_table_1_new", "type": "block", "position": { "x": 510, "y": 180, "w": 150, "h": 150 } }, { "item": "viz_single_1_new_new", "type": "block", "position": { "x": 730, "y": 180, "w": 150, "h": 150 } }, { "item": "viz_single_1_new_new_new", "type": "block", "position": { "x": 950, "y": 180, "w": 150, "h": 150 } }, { "item": "viz_single_1_new_new_new_new", "type": 
"block", "position": { "x": 730, "y": 340, "w": 150, "h": 150 } }, { "item": "viz_single_1_new_new_new_new_new", "type": "block", "position": { "x": 950, "y": 340, "w": 150, "h": 150 } }, { "item": "viz_single_1_new_new_new_new_new_new", "type": "block", "position": { "x": 510, "y": 340, "w": 150, "h": 150 } }, { "item": "viz_single_1_new_new_new_new_new_new_new", "type": "block", "position": { "x": 510, "y": 500, "w": 150, "h": 150 } }, { "item": "viz_single_1_new_new_new_new_new_new_new_new", "type": "block", "position": { "x": 950, "y": 500, "w": 150, "h": 150 } }, { "item": "viz_single_1_new_new_new_new_new_new_new_new_new", "type": "block", "position": { "x": 180, "y": 660, "w": 150, "h": 130 } }, { "item": "viz_single_1_new_new_new_new_new_new_new_new_new_new", "type": "block", "position": { "x": 350, "y": 660, "w": 150, "h": 130 } }, { "item": "viz_single_1_new_new_new_new_new_new_new_new_new_new_new", "type": "block", "position": { "x": 520, "y": 660, "w": 150, "h": 130 } }, { "item": "viz_single_1_new_new_new_new_new_new_new_new_new_new_new_new", "type": "block", "position": { "x": 690, "y": 660, "w": 150, "h": 130 } }, { "item": "viz_single_1_new_new_new_new_new_new_new_new_new_new_new_new_new", "type": "block", "position": { "x": 860, "y": 660, "w": 150, "h": 130 } }, { "item": "viz_single_1_new_new_new_new_new_new_new_new_new_new_new_new_new_new", "type": "block", "position": { "x": 1030, "y": 660, "w": 150, "h": 130 } }, { "item": "viz_single_1_new_new_new_new_new_new_new_new_new_new_new_new_new_new_new", "type": "block", "position": { "x": 730, "y": 500, "w": 150, "h": 150 } }, { "item": "viz_Qa9CUq0z", "type": "block", "position": { "x": 10, "y": 660, "w": 150, "h": 130 } } ], "globalInputs": [ "input_global_trp" ] }, "title": "my view", "defaults": { "dataSources": { "ds.search": { "options": { "queryParameters": { "latest": "$global_time.latest$", "earliest": "$global_time.earliest$" } } } } } } Thanks 
Our Splunk indexer is under-resourced. To match Splunk support's recommendations, we need to add more RAM to it. We have a deployment server with two indexers and two search heads. This upgrade will require about 30 minutes of downtime. What's the best approach for the hardware upgrade?
I have two separate search queries which work separately, but when I try to get data by joining them, I get no results from the second query.

First query:

index=ads sourcetype="sequel"
| eval jobname="Job for p1"
| rex field=_raw "schema:(?P<db>[^ ]+)"
| rex field=_raw "table:(?P<tb>[^ ]+)"
| rex field=_raw "s_total_count:(?P<cnts>[^ ]+)"
| rex field=_raw "origin_cnt_date:(?P<dte>[\D]+[\d]+[ ][\d]+[:]+[\d]+[:]+[\d]+[ ][\D]+[\d]+)"
| eval date=strptime(dte, "%a %B %d %H:%M:%S")
| eval dates=strftime(date, "%Y-%m-%d")
| fields db tb cnts dates jobname
| where cnts>0
| table dates jobname db tb cnts

Second query:

index=ads sourcetype="isosequel"
| rex field=_raw "schema:(?P<db>[^ ]+)"
| rex field=_raw "table:(?P<tb>[^ ]+)"
| rex field=_raw "count:(?P<cnt>[^ ]+)"
| eval jobname1="Job for p2"
| stats sum(cnt) as tb_cnt by jobname1 db tb
| fields jobname1 db tb tb_cnt
| table jobname1 db tb tb_cnt

Joined query (not working as expected):

index=ads sourcetype="sequel"
| eval jobname="Job for p1"
| rex field=_raw "schema:(?P<db>[^ ]+)"
| rex field=_raw "table:(?P<tb>[^ ]+)"
| rex field=_raw "s_total_count:(?P<cnts>[^ ]+)"
| rex field=_raw "origin_cnt_date:(?P<dte>[\D]+[\d]+[ ][\d]+[:]+[\d]+[:]+[\d]+[ ][\D]+[\d]+)"
| eval date=strptime(dte, "%a %B %d %H:%M:%S")
| eval dates=strftime(date, "%Y-%m-%d")
| fields db, tb, cnts, dates, jobname
| join type=inner db tb
    [ search (index=ads sourcetype="isosequel")
    | rex field=_raw "schema:(?P<db>[^ ]+)"
    | rex field=_raw "table:(?P<tb>[^ ]+)"
    | rex field=_raw "count:(?P<cnt>[^ ]+)"
    | rex field=_raw "jobname:Job for (?P<jb>[a-z_A-Z0-9]+)"
    | stats sum(cnt) as tb_cnt by jb db tb
    | fields db, tb, tb_cnt, jb ]
| eval diff = cnts-tb_cnt
| table dates, jobname, jb, db, tb, cnts, tb_cnt, diff

Requirement: I want to compare each db/table with the second query's db/table and get the difference, but I am not getting any results from the second query. Any help would be appreciated!

Thank you in advance!
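A join-free sketch of the same comparison (assumptions: the field extractions above; the leading \s in the count pattern is there so it does not also match s_total_count, so adjust it to your event layout): combine both sourcetypes in one search, keep each count only for its own sourcetype, and aggregate by db and tb:

```spl
index=ads (sourcetype="sequel" OR sourcetype="isosequel")
| rex field=_raw "schema:(?P<db>[^ ]+)"
| rex field=_raw "table:(?P<tb>[^ ]+)"
| rex field=_raw "s_total_count:(?P<cnts>[^ ]+)"
| rex field=_raw "\scount:(?P<cnt>[^ ]+)"
| eval cnt=if(sourcetype="isosequel", cnt, null())
| stats max(cnts) as cnts sum(cnt) as tb_cnt by db tb
| eval diff=cnts-tb_cnt
| table db tb cnts tb_cnt diff
```

This avoids the subsearch result/time limits that join is subject to at larger volumes.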
Hi everyone. Apologies if this answer is on the forum somewhere. We are trying to pass a field value to an alert title, which will be used by the PagerDuty integration; it uses the title of the alert as the title of the PagerDuty incident.

We have tried $result.field_name$ and $field_name$, with no joy. $result.field_name$ works no problem when using it with the custom details section for the integration.

This is the PagerDuty guide if anyone needs it for reference: https://www.pagerduty.com/docs/guides/splunk-integration-guide/

Really appreciate any help.

Thanks, Sam
I want to send an alert when response time > 10 sec accounts for more than 2% of total transactions in an hour. Could you please suggest a proper query to achieve the above requirement?
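A sketch of one way to express this, assuming an index/sourcetype of your own and a numeric response_time field in seconds; schedule the alert hourly and trigger when the search returns a result:

```spl
index=your_index sourcetype=your_sourcetype earliest=-1h@h latest=@h
| eval slow=if(response_time > 10, 1, 0)
| stats sum(slow) as slow_count count as total
| eval slow_pct=round(slow_count/total*100, 2)
| where slow_pct > 2
```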
Hi guys,

We are using the "Microsoft Azure App for Splunk", getting inputs from the "Splunk Add-on for Microsoft Cloud Services" and the "Microsoft Azure Add-on for Splunk". We have created the inputs for audit logs and Security Center logs, and we are getting the data in the "Microsoft Azure App for Splunk" dashboards as expected. The one problem is that we cannot see or select the "Subscription" dropdown menu, even though we see data from multiple subscriptions on the dashboards. In the "Subscription" dropdown menu we see the message "Search produced no result". Any idea why this is? Or is there a bug in that app? Please suggest.
Hi, I am getting the below response from my Splunk query; please refer to the screenshot. As you can see there, the result is 92.20%. My requirement is to send an alert whenever the percentage is below 98.00%, so could you please suggest a proper query to trigger an alert whenever the result from the query is below 98.00% in a time frame of 1 hour?
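Without the query behind the screenshot it is hard to be exact, but the usual pattern is to compute the percentage as a numeric field and filter on it, letting the alert trigger on "number of results > 0". A sketch, where success and total stand in for whatever your query actually computes:

```spl
... | eval pct=round(success/total*100, 2)
| where pct < 98.00
```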
Hi everyone,

I set up a Splunk (on Windows) lab environment to try out some threat activity. I need to get PowerShell logs into Splunk, but there is no option for the Application and Services Logs section in the Splunk web GUI (data inputs).

I want to collect these logs.
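For reference, PowerShell operational logs can also be collected with a WinEventLog stanza in inputs.conf on the Windows host (e.g. in a local app directory), bypassing the web GUI; the index name here is a placeholder:

```ini
[WinEventLog://Microsoft-Windows-PowerShell/Operational]
disabled = 0
index = wineventlog
renderXml = false
```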
My query:

index=s_New sourcetype IN (Compare,Fire)
| stats values(*) as * values(sourcetype) as sourcetype by sysid _time
| fillnull value=""
| eval Status=if(Fire_Agent_Version = "" AND Compare_Agent_Version = "","Not Covered","Covered")
| search OS="*" Group="*" Name="***" Environment="*"
| timechart span=1d count by Status
| addtotals
| eval "Covered %"=round((Covered/Total)*100,2)
| eval "Not Covered %"=round(('Not Covered'/Total)*100,2)
| fields _time "Covered %" "Not Covered %"

The above search is not providing the expected count. I get the correct count for Status with the search below:

index=s_New sourcetype IN (Compare,Fire)
| stats values(*) as * values(sourcetype) as sourcetype by sysid
| fillnull value=""
| eval Status=if(Fire_Agent_Version = "" AND Compare_Agent_Version = "","Not Covered","Covered")
| search OS="*" Group="*" Name="***" Environment="*"
| stats count by Status
| eventstats sum(*) as sum_*
| foreach * [ eval "Status %"=round((count/sum_count)*100,2)]
| rename count as Count
| fields - sum_count
| sort - Count

I think I am missing something in the timechart search. How do I get the exact count for the timechart, as in the second search using stats alone?
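One likely difference between the two searches (a sketch to test, not a confirmed fix): the first aggregates by sysid and _time at full event-time precision, so a host with events at several timestamps is counted several times per day. Binning _time to the day before the stats makes the per-day grouping match the stats-only search:

```spl
index=s_New sourcetype IN (Compare,Fire)
| bin _time span=1d
| stats values(*) as * values(sourcetype) as sourcetype by sysid _time
| fillnull value=""
| eval Status=if(Fire_Agent_Version = "" AND Compare_Agent_Version = "","Not Covered","Covered")
| search OS="*" Group="*" Name="***" Environment="*"
| timechart span=1d count by Status
```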
Hello, I'm trying to filter one lookup with the values of another lookup. This is the situation:

Lookup roles.csv contains the field with the security roles I would like to check for:

Role
role1
role2
role3
role6

Lookup AssignedRoles.csv contains a field with all the assigned roles:

User   Role
User1  role2 role5
User2  role6
User3  role9 role8
User4  role7 role4
User5  role1 role2

Now I want to return a table with all the users in AssignedRoles.csv that have an assigned role from roles.csv. Can anybody help me with an example query, if it is at all possible?

Thanks, Robin
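A sketch of one approach, assuming Role in AssignedRoles.csv is a space-separated multivalue as shown: expand the assigned roles, keep only those present in roles.csv via a subsearch, then regroup by user:

```spl
| inputlookup AssignedRoles.csv
| makemv delim=" " Role
| mvexpand Role
| search [| inputlookup roles.csv | fields Role]
| stats values(Role) as MatchedRoles by User
```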
Hi friends, does anyone know of an application for the Cyberoam firewall? After Cyberoam was sold to Sophos, the other applications were removed from the Splunk site and there is only one add-on. I need a Cyberoam application for Splunk.
I'm busting my head and I can't seem to get anywhere. I currently have all my F5 logs going into the sourcetype f5:bigip:syslog, and I would like to split the data and create two new sourcetypes based on the format of the data. Can someone explain how to go about this? Basically, I want to pull the APM and HTTP logs out into the two new sourcetypes. The logs are being sent as syslog via a UF, so I know I need to do this on the indexers. Will I have to create a custom app?
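The usual mechanism for this is an index-time sourcetype override on the indexers; a custom app directory is just a tidy way to ship the two files, not strictly required. A sketch, where the REGEX values are placeholders you would replace with patterns that reliably distinguish your APM and HTTP events, and the new sourcetype names are your choice:

props.conf:

```ini
[f5:bigip:syslog]
TRANSFORMS-f5split = f5_set_apm, f5_set_http
```

transforms.conf:

```ini
[f5_set_apm]
REGEX = <pattern that matches only APM events>
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::f5:bigip:apm:syslog

[f5_set_http]
REGEX = <pattern that matches only HTTP events>
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::f5:bigip:ltm:http
```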
Hello team,

I am facing an issue while trying to extract fields from the below events. Please help with this.

Event:
150022 High 2021.11.22 03:32:44 App Proxy: Utilization of preprocessing manager processes over 80% prd-Server06 1.2.3.4 Utilization of preprocessing manager internal processes, in % 100 %

Extraction used:
^(?:[^:\n]*:){2}\d+\s+(?P<field1>[^\t]+)(?:[^\.\n]*\.){3}\d+\s+(?P<field2>[^ ]+)(?:[^ \n]* ){7}\%\s(?P<field3>.+)

Although all the other fields are extracted as expected, the field2 capture does not extract the highlighted/underlined field. Please let me know how I can fix field2 here.
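If the missing value is the hostname between the percentage and the IP address (an assumption, since the highlighting did not survive into this text), a simpler anchored extraction may be more robust than counting tokens:

```spl
| rex field=_raw "%\s+(?<field2>\S+)\s+(?<ip>\d{1,3}(?:\.\d{1,3}){3})"
```

Against the sample event this would capture field2=prd-Server06 and ip=1.2.3.4.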
Hi! I have a setup where I must clone and forward data to a third party. Can somebody clarify whether, if I disable useACK, the flow to the other outputs does not stall even when one destination is unreachable? Looking at the spec of outputs.conf, it's not fully clear to me what happens when the forwarder writes to the socket and the destination is unreachable:

* When set to "false", the forwarder considers the data fully processed when it finishes writing it to the network socket.

Thanks in advance!
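For concreteness, a cloning setup along these lines is usually expressed with two tcpout groups. One mitigation often discussed for an unreliable clone target (not a definitive answer to the useACK question) is dropEventsOnQueueFull, an outputs.conf setting that gives up on a full output queue after the given number of seconds, at the cost of losing events for that destination. Hostnames here are placeholders:

```ini
[tcpout]
defaultGroup = primary_indexers, third_party

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
useACK = true

[tcpout:third_party]
server = thirdparty.example.com:9997
useACK = false
dropEventsOnQueueFull = 30
```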
Hi team,

I registered on Splunk VictorOps, but I wasn't able to verify my mobile number (the verification code is not being received). My account details are below.

Email - danny@prezzee.com
User Name - danny_hamshananth
Contact Number - +94 75 074 0708 (Sri Lanka)

Need your prompt support.

Thanks & Regards,
Danny Hamshananth
Splunk trial on Linux WSL 20.04. The Splunk file has been downloaded and extracted, and when starting Splunk there is the following error:

homePath='/opt/splunk/var/lib/splunk/audit/db' of index=_audit on unusable filesystem.
Validating databases (splunkd validatedb) failed with code '1'.

I have checked the location /opt/splunk/var/lib/splunk/audit/db and it contains the following files: test.2BREdp test.PNih3U
I have raw data, and I would like to search for domains within the data, output them to a field, and then run stats to show a count of each unique domain.

Example of raw data: "This investigation is really great and we found the suspicious domain google.com"

I would like to:
1. Search for domains within the raw data and output the domain to a field that I can show in a table (let's call it "Domain").
2. Run stats that show the number of occurrences.

So ideally, my finished result would be:

Domain      count
google.com  50
yahoo.com   30

Any assistance is greatly appreciated, thank you.
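A sketch of one approach, assuming the domains end in a known set of TLDs (extend the list as needed; index=your_index is a placeholder); max_match=0 captures every domain in an event into a multivalue field:

```spl
index=your_index
| rex field=_raw max_match=0 "(?<Domain>[a-zA-Z0-9][a-zA-Z0-9\-\.]*\.(?:com|net|org|io|co))"
| mvexpand Domain
| stats count by Domain
| sort - count
```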