All Topics


Hi folks, I am facing an issue where I cannot see the red bar in the panel below. The count is per hour, and the error count is mostly 1 or 2 events per hour. How can I make the red bar visible? Any help or suggestions, please?
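One possibility, offered only as a sketch (the panel XML isn't shown in the post): when one series is in the hundreds and another is only 1 or 2 events per hour, switching the Y-axis to a logarithmic scale in the panel's Simple XML usually keeps the small series visible.

<option name="charting.axisY.scale">log</option>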
This app worked for about a day, then started giving us this error:

11-18-2021 06:04:27.982 -0500 ERROR ExecProcessor [44632 ExecProcessor] - message from "/proj/app/splunk/bin/python3.7 /proj/app/splunk/etc/apps/TA-MS_Defender/bin/microsoft_defender_atp_alerts.py" raise ConnectionError(err, request=request)
11-18-2021 06:04:27.982 -0500 ERROR ExecProcessor [44632 ExecProcessor] - message from "/proj/app/splunk/bin/python3.7 /proj/app/splunk/etc/apps/TA-MS_Defender/bin/microsoft_defender_atp_alerts.py" requests.exceptions.ConnectionError: ('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer'))
11-18-2021 06:04:28.019 -0500 ERROR ExecProcessor [44632 ExecProcessor] - message from "/proj/app/splunk/bin/python3.7 /proj/app/splunk/etc/apps/TA-MS_Defender/bin/microsoft_defender_atp_alerts.py" ERROR('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer'))

Any ideas on what would cause this error?
Id=xyz id=ABC id=EDC Id=FIS

index=* event=*
| eval id = case(id = "xyz" , "one", id = "ABC", "Two")
| eval index = case(index="work_prod","PROD", index="work_qa","QA")
| table id, index, status
| stats count(eval(status ="success")) AS Success, count(eval(status ="failure")) AS Failure BY id, index
| rename index as Env, id as Application_name

I am using the above query to get the application name and the count of failures and successes. The result I am seeing:

Application_name  Env   Success  Failure
one               Prod  100      2
Two               QA    20       10

I have more than two ids, but since I only eval two ids in the case(), the output shows only those two. How can I get the rest?

Expected result:

Application_name  Env   Success  Failure
one               Prod  100      2
Two               QA    20       10
EDC               QA    20       10
FIS               PROD  20       10
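A minimal sketch of one way to keep the remaining ids: case() accepts a final true() pair as a default, so any id (or index) that is not explicitly mapped passes through unchanged.

| eval id = case(id="xyz","one", id="ABC","Two", true(), id)
| eval index = case(index="work_prod","PROD", index="work_qa","QA", true(), index)
| stats count(eval(status="success")) AS Success, count(eval(status="failure")) AS Failure BY id, index
| rename index as Env, id as Application_name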
Hi folks, I tried to configure the AWS add-on on my subscription, but I received this error for the CloudTrail log:

message="Failed to download file"

Splunk version = 8.2.0
Input type = SQS-Based S3
AWS add-on version = 5.0

Any suggestions? Is there anything to check on the policy side in the AWS console?
Hi there, I am new to Splunk. I was wondering how to find the difference between the last time a forwarder sent a log and now, and, if the host has not sent a log in 5 minutes, set its status to offline. I am trying to achieve this expected outcome of the time comparison. Thank you.
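A minimal sketch, assuming the metadata command is acceptable for a per-host view; recentTime comes from metadata itself (the index time of the most recent event), and 300 seconds is the 5-minute cutoff.

| metadata type=hosts index=*
| eval minutes_since_last = round((now() - recentTime) / 60, 1)
| eval status = if(now() - recentTime > 300, "offline", "online")
| convert ctime(recentTime) AS last_event_time
| table host last_event_time minutes_since_last status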
I've got a situation that I thought I understood but clearly don't. I have logs that look like this:

2021-11-22 14:00:00 Event=InventoryComplete ComputerName=Server1 ComputerName=Server2 ComputerName=ServerN

I thought that ComputerName would automatically be a multivalue field because there are multiple copies of that Key=Value pair, and that I'd be able to search on any of the values. I believe there are instances where this works automatically, but it isn't working here.

| search sourcetype=inventory_audit ComputerName=Server1 ```works```
| search sourcetype=inventory_audit ComputerName=Server2 ```no results```
| search sourcetype=inventory_audit "ComputerName=Server2" ```forcing a text search works```

Is there something I can do to make these events implicitly multivalue? Ideally for the entire sourcetype, regardless of the specific field name, as this sourcetype covers a wide variety of audit logs with different object classes.
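If a search-time workaround is enough, one hedged sketch is to re-extract the repeated key as a multivalue field with rex max_match=0 (the permanent, sourcetype-wide fix would normally live in props/transforms, e.g. an extraction with MV_ADD, but that isn't shown here and should be checked against the docs):

sourcetype=inventory_audit
| rex max_match=0 "ComputerName=(?<ComputerName>\S+)"
| search ComputerName=Server2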
Hello. I have two indexes and three users. Each user is in a specific AD group, each group is mapped to a respective role, and each role gives access to a specific index (or indexes). user_a can only search index_a, user_b only index_b, and user_c can search both. Restricting access to indexes is important.

Indexes:
index_a
index_b

AD groups:
user_idx_a (user_a is a member)
user_idx_b (user_b is a member)
user_idx_all (user_c is a member)

Users and roles:
user_a has role role_a
user_b has role role_b
user_c has role role_c

authorize.conf:
[role_role_a]
importRoles = user
srchIndexesDefault = index_a
srchIndexesDisallowed = index_b

[role_role_b]
importRoles = user
srchIndexesDefault = index_b
srchIndexesDisallowed = index_a

[role_role_c]
importRoles = user

That's all fine. But as more indexes are added, I wonder how this will scale, especially where, for example, user_d needs access to newly created index_d plus, say, index_a. I will then need a new AD group (user_idx_d), a new role (role_role_a_d), and a suitable entry in authorize.conf. I've gained some mileage from putting index restrictions on the inherited (user) role, for example:

[role_user]
srchIndexesDisallowed = main;splunklogger;summary

I had thought I would put users in multiple AD groups, but whilst membership brings a new role / index, it also means I end up with conflicting 'disallowed' directives. Is there a better way? Or have I reduced the administration to the minimum whilst maintaining index access granularity? Many thanks.
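One pattern worth sketching (my assumption, not something from the post): define one small role per index using srchIndexesAllowed rather than srchIndexesDisallowed, and give each user the combination of roles they need; allowed index lists are additive across a user's roles, so user_d could simply hold role_idx_a and role_idx_d without a combined role, and the conflicting 'disallowed' directives go away because nothing is disallowed on the per-index roles.

[role_idx_a]
importRoles = user
srchIndexesAllowed = index_a

[role_idx_d]
importRoles = user
srchIndexesAllowed = index_d

The trade-off is that you still need one AD group per per-index role, and any srchIndexesDisallowed set on an inherited role still wins over an allowed list, so the restrictions on [role_user] would need to be reviewed.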
I want to add users to Splunk via a DL, and they need to be assigned roles.
Good morning. I support an application development effort that is transitioning from Elasticsearch to Splunk. I would like to set up a POC test instance/cluster of Splunk on our dev network. With Elastic, I would simply download an RPM and get started. With Splunk, it is unclear to me (reading the docs) how to get started, regarding licensing and which files I can download. Apologies for the low-level questions, but where can I get started? Which file can I download to install an instance and, hopefully, create a small (3/4 node) cluster for a POC? Thanks, Larry
Hi all, I have the following problem set: I have an index that rolls data out every 30 days (i.e., data older than 30 days is removed). There is a subset of data from this index that I would like to query over a longer period of time, say 12 or 24 months. I'm fairly new to the idea of summary indexes, but it sounds like the logical solution. However, I'm concerned about losing previous data (that has already been removed from the original index) each time the summary index search is scheduled to run. Is there a way for a summary index to keep the data from old runs so I can build a dataset that spans multiple months of the original index? Thanks in advance!
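A minimal sketch of the usual pattern, assuming an hourly scheduled search and illustrative index/field names: each run appends only its own time window to the summary index (via collect, or by enabling summary indexing on the saved search), so earlier runs stay in the summary index even after the source data ages out, and the summary index can be given its own, longer retention.

index=my_source_index some_filter earliest=-1h@h latest=@h
| stats count AS event_count BY host
| collect index=my_summary addtime=true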
When trying to create a search head cluster on Ubuntu 20.04 with Splunk Enterprise 8.2.2.2, I receive an init error. It seems Splunk is not able to use init on my system. If I run the following command on my first search head server:

splunk_admin@server1:/opt/splunk/bin$ sudo ./splunk init
Command error: 'init' is not a valid command. Please run 'splunk help' to see the valid commands.

The full command to initialize the search head cluster does not work either; I just posted the command with init alone. Perhaps Splunk cannot see it or does not have the proper rights to it? Other commands seem to work:

splunk_admin@server:/opt/splunk/bin$ sudo ./splunk list
Command error: Additional arguments are needed for the 'list' command. Please type "splunk help list" for usage and examples.

Running the command under root:

root@server1:/opt/splunk/bin# ./splunk init
Command error: 'init' is not a valid command. Please run 'splunk help' to see the valid commands.

Splunk Enterprise is running under the splunk account.

splunk_admin@server1:/opt/splunk/bin$ sudo ps -elf|grep splunkd
1 S splunk 39839 1 2 80 0 - 158573 ep_pol 08:20 ? 00:10:33 splunkd -p 8089 start
1 S splunk 39840 39839 0 80 0 - 25243 ep_pol 08:20 ? 00:00:14 [splunkd pid=39839] splunkd -p 8089 start [process-runner]
0 S splunk 40102 39840 0 80 0 - 47271 poll_s 08:20 ? 00:00:42 /opt/splunk/bin/splunkd instrument-resource-usage -p 8089 --with-kvstore
0 S splunk_+ 115584 108329 0 80 0 - 1608 pipe_w 15:22 pts/0 00:00:00 grep --color=auto splunkd

Any advice on solving this issue would be greatly appreciated.
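For reference, a hedged sketch of the documented syntax (all values below are placeholders): as far as I know, init is only accepted together with the shcluster-config subcommand and its arguments on one command line, so if even this form returns the same error, something else is going on.

./splunk init shcluster-config -auth admin:changed_password \
  -mgmt_uri https://server1.example.com:8089 \
  -replication_port 9887 \
  -replication_factor 3 \
  -conf_deploy_fetch_url https://deployer.example.com:8089 \
  -secret shared_secret \
  -shcluster_label shcluster1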
Hi, how can I tune this SPL query? It runs daily and returns something like this:

servername   send                 receive              customer                   ID   status
Customer4    2021-21-11 12:12:39  2021-21-11 12:15:03  CUS.AaBB-APP1-12345_CUS    10   144.772000
Customer3    2021-21-11 12:09:58  2021-21-11 12:12:03  CUS.AaBB-APP1-98765_CUS    20   125.616000

Here are the statistics for this query:

Events: 72,070,802 (11/21/21 12:00:00.000 AM to 11/22/21 12:00:00.000 AM)
Size: 2.09 GB
Statistics: 248,138 rows

It takes a huge amount of time to return the result. Is there any way to tune the query, or any trick that returns this result faster? FYI: I tried using a summary index, but it still takes a long time to return the result.

Here is my query:

index="myindex" source="/data/product/*/customer*" (date_hour>=1 AND (date_hour<23 OR (date_hour=23 date_minute<30))) "Packet Processed" OR "Normal Packet Received"
| rex field=source "\/data\/(?<product>\w+)\/(?<date>\d+)\/(?<servername>\w+)"
| rex ID\[(?<ID>\d+)
| rex "^(?<timestamp>.{23}) INFO (?<customer>.*) \[AppServiceName\] (?<status>.*): M\[(?<Acode>.*)\] T\[(?<Bcode>\d+)\]"
| rex field=customer "_(?<customer2>.*)"
| eval customer2=coalesce(customer2,customer), customer=if(customer=customer2,null(),customer)
| eval sendTime=if(status="Packet Processed",strptime(timestamp,"%Y-%m-%d %H:%M:%S,%3Q"),null()), receiveTime=if(status="Normal Packet Received",strptime(timestamp,"%Y-%m-%d %H:%M:%S,%3Q"),null())
| eval AcodeSend=if(status="Packet Processed",Acode,null()),BcodeSend=if(status="Packet Processed",Bcode,null()),AcodeReceive=if(status="Normal Packet Received",Acode,null()),BcodeReceive=if(status="Normal Packet Received",Bcode,null())
| eval AcodeReceiveLookFor=AcodeSend+10,acr=coalesce(AcodeReceive,AcodeReceiveLookFor)
| fields - Acode _time timestamp status AcodeReceiveLookFor
| stats values(*) as *,count by customer2,acr,Bcode
| eval duration=receiveTime-sendTime , customer=coalesce(customer,customer2)
| eval status=case(isnull(AcodeSend),"No Send",isnull(AcodeReceive),"No receive")
| eventstats max(duration) as duration by customer2
| where count=2 OR (status="No receive" AND isnull(duration))
| eval status=coalesce(status,duration)
| search NOT status="No receive"
| search NOT status="No Send"
| search status>2
| eval send=strftime(sendTime, "%Y-%d-%m %H:%M:%S")
| eval receive=strftime(receiveTime, "%Y-%d-%m %H:%M:%S")
| table servername send receive customer ID status

Any ideas? Thanks
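Not a full rewrite, just a hedged sketch of two changes that often help with this shape of search: replace the catch-all stats values(*) as * with only the fields used later (cutting the data carried between stats and the later evals), and fold the three trailing search commands into a single where clause. Field names below are taken from the query above; since status is either "No Send", "No receive", or a numeric duration at that point, isnum() should cover both string filters.

| stats values(sendTime) as sendTime values(receiveTime) as receiveTime
        values(AcodeSend) as AcodeSend values(AcodeReceive) as AcodeReceive
        values(customer) as customer values(ID) as ID values(servername) as servername
        count by customer2, acr, Bcode
...
| where isnum(status) AND status > 2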
Hi, I have logs in the format below, which is a mix of pipe (|) delimiters and JSON. I want to extract statusCode and statusCodeValue and create a table with the columns _time, statusCodeValue, statusCode. Can someone please help me?

2021-11-22 05:52:09.755 INFO - c.t.c.a.t.service.UserInfoService(101) - abcd | abcd-APP | /user-info | af4772c0-1fcd-4a82-858e-c2f7f0821724 | APP | -| Response of validateAddress abcd Service: { "headers" : { }, "body" : { "baseError" : { "code" : "3033", "reason" : "User is unauthorized", "explanation" : "Unauthorized" } }, "statusCode" : "UNAUTHORIZED", "statusCodeValue" : 401 }
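A minimal sketch with rex, assuming the keys are always quoted exactly as in the sample (index and sourcetype are placeholders; spath would also work if the JSON part were isolated first):

index=your_index sourcetype=your_sourcetype "statusCodeValue"
| rex "\"statusCode\"\s*:\s*\"(?<statusCode>[^\"]+)\""
| rex "\"statusCodeValue\"\s*:\s*(?<statusCodeValue>\d+)"
| table _time statusCodeValue statusCode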
Hi! I make a dashboard in Splunk Dashboard Studio, but I don't know how I can program the Auto refresh ( every 30 sec) to update the entire dashboard.   Please Help!   { "dataSources": { "ds_search_1_new_new_new_new_new_new_new_new_new_new_new_new_new_new_new_new_new_new": { "type": "ds.search", "options": { "query": "index=jd1 source=2021|stats latest(COUNT) as 2021", "queryParameters": { "earliest": "-15m", "latest": "now" } } }, "ds_search_1_new_new_new_new_new_new_new_new_new_new_new_new_new_new_new_new_new": { "type": "ds.search", "options": { "query": "index=jd1 source=2021App|stats latest(COUNT) as 2021App", "queryParameters": { "earliest": "-15m", "latest": "now" } } }, "ds_search_1_new_new_new_new_new_new_new_new_new_new_new_new_new_new_new_new": { "type": "ds.search", "options": { "query": "index=jd1 source=2021OF|stats latest(COUNT) as 2021OF", "queryParameters": { "earliest": "-15m", "latest": "now" } } }, "ds_search_1_new_new_new_new_new_new_new_new_new_new_new_new_new_new_new": { "type": "ds.search", "options": { "query": "index=jd1 source=2021OC|stats latest(COUNT) as 2021OC", "queryParameters": { "earliest": "-15m", "latest": "now" } } }, "ds_search_1_new_new_new_new_new_new_new_new_new_new_new_new_new_new": { "type": "ds.search", "options": { "query": "index=jd1 source=2021OD|stats latest(COUNT) as 2021OD", "queryParameters": { "earliest": "-15m", "latest": "now" } } }, "ds_search_1_new_new_new_new_new_new_new_new_new_new_new_new_new": { "type": "ds.search", "options": { "query": "index=jd1 source=2021OF|stats latest(COUNT) as 2021OF", "queryParameters": { "earliest": "-15m", "latest": "now" } } }, "ds_search_1_new_new_new_new_new_new_new_new_new_new_new_new": { "type": "ds.search", "options": { "query": "index=jd1 source=2021OH|stats latest(COUNT) as 2021OH", "queryParameters": { "earliest": "-15m", "latest": "now" } } }, "ds_search_1_new_new_new_new_new_new_new_new_new_new_new": { "type": "ds.search", "options": { "query": "index=jd1 source=2021Oss|stats latest(COUNT) as 2021Oss", "queryParameters": { "earliest": "-15m", "latest": "now" } } }, "ds_search_1_new_new_new_new_new_new_new_new_new_new": { "type": "ds.search", "options": { "query": "index=jd1 source=Apply2021|stats latest(COUNT) as Apply2021", "queryParameters": { "earliest": "-15m", "latest": "now" } } }, "ds_search_1_new_new_new_new_new_new_new_new_new": { "type": "ds.search", "options": { "query": "index=jd1 source=2021AppError|stats latest(COUNT) as 2021AppError", "queryParameters": { "earliest": "-15m", "latest": "now" } } }, "ds_search_1_new_new_new_new_new_new_new_new": { "type": "ds.search", "options": { "query": "index=jd1 source=2021Completed|stats latest(COUNT) as 2021Completed", "queryParameters": { "earliest": "-15m", "latest": "now" } } }, "ds_search_1_new_new_new_new_new_new_new": { "type": "ds.search", "options": { "query": "index=jd1 source=2021AppRefNotCompleted|stats latest(COUNT) as 2021AppRefNotCompleted", "queryParameters": { "earliest": "-15m", "latest": "now" } } }, "ds_search_1_new_new_new_new_new_new": { "type": "ds.search", "options": { "query": "index=jd1 source=2021AppReturned|stats latest(COUNT) as 2021AppReturned", "queryParameters": { "earliest": "-15m", "latest": "now" } } }, "ds_search_1_new_new_new_new_new": { "type": "ds.search", "options": { "query": "index=jd1 source=TotalApp2021|stats latest(COUNT) as TotalApp2021", "queryParameters": { "earliest": "-15m", "latest": "now" } } }, "ds_search_1_new_new_new_new": { "type": "ds.search", "options": { "query": "index=jd1 
source=TotalSApp2021|stats latest(COUNT) as TotalSApp2021", "queryParameters": { "earliest": "-15m", "latest": "now" } } }, "ds_search_1_new_new_new": { "type": "ds.search", "options": { "query": "index=jd1 source=2021NT|stats latest(COUNT) as 2021NT", "queryParameters": { "earliest": "-15m", "latest": "now" } } }, "ds_search_1_new_new": { "type": "ds.search", "options": { "query": "index=jd1 source=TotalAI2021|stats latest(COUNT) as TotalAI2021", "queryParameters": { "earliest": "-15m", "latest": "now" } } }, "ds_search_1_new": { "type": "ds.search", "options": { "query": "index=jd1 source=TotalAS2021|stats latest(COUNT) as TotalAS2021", "queryParameters": { "earliest": "-15m", "latest": "now" } } }, "ds_search_1": { "type": "ds.search", "options": { "query": "index=jd1 source=2021NInApply|stats latest(COUNT) as 2021NInApply", "queryParameters": { "earliest": "-15m", "latest": "now" } } }, "ds_pEQkMQUp_ds_search_1": { "type": "ds.search", "options": { "query": "index=jd1 source=2021|table COUNT|dedup COUNT", "queryParameters": { "earliest": "-15m", "latest": "now" } } }, "ds_6wDR22mI_ds_search_1_new_new_new_new_new_new_new_new_new_new_new_new_new_new_new_new_new_new": { "type": "ds.search", "options": { "query": "index=jd1 source=2021C|stats latest(COUNT) as 2021C", "queryParameters": { "earliest": "-15m", "latest": "now" } } } }, "visualizations": { "viz_single_1_new_new_new_new_new_new_new_new_new_new_new_new_new_new_new": { "type": "splunk.singlevalue", "options": { "colorMode": "none", "drilldown": "none", "numberPrecision": 0, "sparklineDisplay": "below", "trendDisplay": "absolute", "trellis.enabled": 0, "trellis.scales.shared": 1, "trellis.size": "medium", "unitPosition": "after", "shouldUseThousandSeparators": true }, "dataSources": { "primary": "ds_search_1_new_new_new_new_new_new_new_new_new_new_new_new_new_new_new_new_new" }, "title": "StO" }, "viz_single_1_new_new_new_new_new_new_new_new_new_new_new_new_new_new": { "type": "splunk.singlevalue", "options": { "colorMode": "none", "drilldown": "none", "numberPrecision": 0, "sparklineDisplay": "below", "trendDisplay": "absolute", "trellis.enabled": 0, "trellis.scales.shared": 1, "trellis.size": "medium", "unitPosition": "after", "shouldUseThousandSeparators": true }, "dataSources": { "primary": "ds_search_1_new_new_new_new_new_new_new_new_new_new_new_new_new_new_new_new" }, "title": "FRO" }, "viz_single_1_new_new_new_new_new_new_new_new_new_new_new_new_new": { "type": "splunk.singlevalue", "options": { "colorMode": "none", "drilldown": "none", "numberPrecision": 0, "sparklineDisplay": "below", "trendDisplay": "absolute", "trellis.enabled": 0, "trellis.scales.shared": 1, "trellis.size": "medium", "unitPosition": "after", "shouldUseThousandSeparators": true }, "dataSources": { "primary": "ds_search_1_new_new_new_new_new_new_new_new_new_new_new_new_new_new_new" }, "title": "CANCEL" }, "viz_single_1_new_new_new_new_new_new_new_new_new_new_new_new": { "type": "splunk.singlevalue", "options": { "colorMode": "none", "drilldown": "none", "numberPrecision": 0, "sparklineDisplay": "below", "trendDisplay": "absolute", "trellis.enabled": 0, "trellis.scales.shared": 1, "trellis.size": "medium", "unitPosition": "after", "shouldUseThousandSeparators": true }, "dataSources": { "primary": "ds_search_1_new_new_new_new_new_new_new_new_new_new_new_new_new_new" }, "title": "Dp" }, "viz_single_1_new_new_new_new_new_new_new_new_new_new_new": { "type": "splunk.singlevalue", "options": { "colorMode": "none", "drilldown": "none", "numberPrecision": 0, 
"sparklineDisplay": "below", "trendDisplay": "absolute", "trellis.enabled": 0, "trellis.scales.shared": 1, "trellis.size": "medium", "unitPosition": "after", "shouldUseThousandSeparators": true }, "dataSources": { "primary": "ds_search_1_new_new_new_new_new_new_new_new_new_new_new_new_new" }, "title": "FAIL" }, "viz_single_1_new_new_new_new_new_new_new_new_new_new": { "type": "splunk.singlevalue", "options": { "colorMode": "none", "drilldown": "none", "numberPrecision": 0, "sparklineDisplay": "below", "trendDisplay": "absolute", "trellis.enabled": 0, "trellis.scales.shared": 1, "trellis.size": "medium", "unitPosition": "after", "shouldUseThousandSeparators": true }, "dataSources": { "primary": "ds_search_1_new_new_new_new_new_new_new_new_new_new_new_new" }, "title": "H/T" }, "viz_single_1_new_new_new_new_new_new_new_new_new": { "type": "splunk.singlevalue", "options": { "colorMode": "none", "drilldown": "none", "numberPrecision": 0, "sparklineDisplay": "below", "trendDisplay": "absolute", "trellis.enabled": 0, "trellis.scales.shared": 1, "trellis.size": "medium", "unitPosition": "after", "shouldUseThousandSeparators": true }, "dataSources": { "primary": "ds_search_1_new_new_new_new_new_new_new_new_new_new_new" }, "title": "S/S" }, "viz_single_1_new_new_new_new_new_new_new_new": { "type": "splunk.singlevalue", "options": { "colorMode": "none", "drilldown": "none", "numberPrecision": 0, "sparklineDisplay": "below", "trendDisplay": "absolute", "trellis.enabled": 0, "trellis.scales.shared": 1, "trellis.size": "medium", "unitPosition": "after", "shouldUseThousandSeparators": true }, "dataSources": { "primary": "ds_search_1_new_new_new_new_new_new_new_new_new_new" }, "title": "Current" }, "viz_single_1_new_new_new_new_new_new_new": { "type": "splunk.singlevalue", "options": { "colorMode": "none", "drilldown": "none", "numberPrecision": 0, "sparklineDisplay": "below", "trendDisplay": "absolute", "trellis.enabled": 0, "trellis.scales.shared": 1, "trellis.size": "medium", "unitPosition": "after", "shouldUseThousandSeparators": true }, "dataSources": { "primary": "ds_search_1_new_new_new_new_new_new_new_new_new" }, "title": "SAP" }, "viz_single_1_new_new_new_new_new_new": { "type": "splunk.singlevalue", "options": { "colorMode": "none", "drilldown": "none", "numberPrecision": 0, "sparklineDisplay": "below", "trendDisplay": "absolute", "trellis.enabled": 0, "trellis.scales.shared": 1, "trellis.size": "medium", "unitPosition": "after", "shouldUseThousandSeparators": true }, "dataSources": { "primary": "ds_search_1_new_new_new_new_new_new_new_new" }, "title": "Application C" }, "viz_single_1_new_new_new_new_new": { "type": "splunk.singlevalue", "options": { "colorMode": "none", "drilldown": "none", "numberPrecision": 0, "sparklineDisplay": "below", "trendDisplay": "absolute", "trellis.enabled": 0, "trellis.scales.shared": 1, "trellis.size": "medium", "unitPosition": "after", "shouldUseThousandSeparators": true }, "dataSources": { "primary": "ds_search_1_new_new_new_new_new_new_new" }, "title": "Application not c" }, "viz_single_1_new_new_new_new": { "type": "splunk.singlevalue", "options": { "colorMode": "none", "drilldown": "none", "numberPrecision": 0, "sparklineDisplay": "below", "trendDisplay": "absolute", "trellis.enabled": 0, "trellis.scales.shared": 1, "trellis.size": "medium", "unitPosition": "after", "shouldUseThousandSeparators": true }, "dataSources": { "primary": "ds_search_1_new_new_new_new_new_new" }, "title": "Applications returned" }, "viz_single_1_new_new_new": { "type": 
"splunk.singlevalue", "options": { "colorMode": "none", "drilldown": "none", "numberPrecision": 0, "sparklineDisplay": "below", "trendDisplay": "absolute", "trellis.enabled": 0, "trellis.scales.shared": 1, "trellis.size": "medium", "unitPosition": "after", "shouldUseThousandSeparators": true }, "dataSources": { "primary": "ds_search_1_new_new_new_new_new" }, "title": "Applied" }, "viz_single_1_new_new": { "type": "splunk.singlevalue", "options": { "colorMode": "none", "drilldown": "none", "numberPrecision": 0, "sparklineDisplay": "below", "trendDisplay": "absolute", "trellis.enabled": 0, "trellis.scales.shared": 1, "trellis.size": "medium", "unitPosition": "after", "shouldUseThousandSeparators": true }, "dataSources": { "primary": "ds_search_1_new_new_new_new" }, "title": "Applied s" }, "viz_table_1_new": { "type": "splunk.singlevalue", "dataSources": { "primary": "ds_search_1_new_new_new" }, "title": "Applications S2" }, "viz_single_1_new": { "type": "splunk.singlevalue", "options": { "colorMode": "none", "drilldown": "none", "numberPrecision": 0, "sparklineDisplay": "below", "trendDisplay": "absolute", "trellis.enabled": 0, "trellis.scales.shared": 1, "trellis.size": "medium", "unitPosition": "after", "shouldUseThousandSeparators": true }, "dataSources": { "primary": "ds_search_1_new_new" }, "title": "Applied I" }, "viz_single_1": { "type": "splunk.singlevalue", "options": { "colorMode": "none", "drilldown": "none", "numberPrecision": 0, "sparklineDisplay": "below", "trendDisplay": "absolute", "trellis.enabled": 0, "trellis.scales.shared": 1, "trellis.size": "medium", "unitPosition": "after", "shouldUseThousandSeparators": true }, "dataSources": { "primary": "ds_search_1_new" }, "title": "Applied s3" }, "viz_table_1": { "type": "splunk.singlevalue", "dataSources": { "primary": "ds_search_1" }, "options": { "sparklineDisplay": "after" }, "title": "Total A" }, "viz_Qa9CUq0z": { "type": "splunk.singlevalue", "options": { "colorMode": "none", "drilldown": "none", "numberPrecision": 0, "sparklineDisplay": "below", "trendDisplay": "absolute", "trellis.enabled": 0, "trellis.scales.shared": 1, "trellis.size": "medium", "unitPosition": "after", "shouldUseThousandSeparators": true }, "dataSources": { "primary": "ds_6wDR22mI_ds_search_1_new_new_new_new_new_new_new_new_new_new_new_new_new_new_new_new_new_new" }, "title": "CL" } }, "inputs": { "input_global_trp": { "type": "input.timerange", "options": { "token": "global_time", "defaultValue": "-24h@h,now" }, "title": "Global Time Range" } }, "layout": { "type": "absolute", "options": { "height": 800 }, "structure": [ { "item": "viz_table_1", "type": "block", "position": { "x": 510, "y": 20, "w": 150, "h": 150 } }, { "item": "viz_single_1", "type": "block", "position": { "x": 730, "y": 20, "w": 150, "h": 150 } }, { "item": "viz_single_1_new", "type": "block", "position": { "x": 950, "y": 20, "w": 150, "h": 150 } }, { "item": "viz_table_1_new", "type": "block", "position": { "x": 510, "y": 180, "w": 150, "h": 150 } }, { "item": "viz_single_1_new_new", "type": "block", "position": { "x": 730, "y": 180, "w": 150, "h": 150 } }, { "item": "viz_single_1_new_new_new", "type": "block", "position": { "x": 950, "y": 180, "w": 150, "h": 150 } }, { "item": "viz_single_1_new_new_new_new", "type": "block", "position": { "x": 730, "y": 340, "w": 150, "h": 150 } }, { "item": "viz_single_1_new_new_new_new_new", "type": "block", "position": { "x": 950, "y": 340, "w": 150, "h": 150 } }, { "item": "viz_single_1_new_new_new_new_new_new", "type": "block", "position": { 
"x": 510, "y": 340, "w": 150, "h": 150 } }, { "item": "viz_single_1_new_new_new_new_new_new_new", "type": "block", "position": { "x": 510, "y": 500, "w": 150, "h": 150 } }, { "item": "viz_single_1_new_new_new_new_new_new_new_new", "type": "block", "position": { "x": 950, "y": 500, "w": 150, "h": 150 } }, { "item": "viz_single_1_new_new_new_new_new_new_new_new_new", "type": "block", "position": { "x": 180, "y": 660, "w": 150, "h": 130 } }, { "item": "viz_single_1_new_new_new_new_new_new_new_new_new_new", "type": "block", "position": { "x": 350, "y": 660, "w": 150, "h": 130 } }, { "item": "viz_single_1_new_new_new_new_new_new_new_new_new_new_new", "type": "block", "position": { "x": 520, "y": 660, "w": 150, "h": 130 } }, { "item": "viz_single_1_new_new_new_new_new_new_new_new_new_new_new_new", "type": "block", "position": { "x": 690, "y": 660, "w": 150, "h": 130 } }, { "item": "viz_single_1_new_new_new_new_new_new_new_new_new_new_new_new_new", "type": "block", "position": { "x": 860, "y": 660, "w": 150, "h": 130 } }, { "item": "viz_single_1_new_new_new_new_new_new_new_new_new_new_new_new_new_new", "type": "block", "position": { "x": 1030, "y": 660, "w": 150, "h": 130 } }, { "item": "viz_single_1_new_new_new_new_new_new_new_new_new_new_new_new_new_new_new", "type": "block", "position": { "x": 730, "y": 500, "w": 150, "h": 150 } }, { "item": "viz_Qa9CUq0z", "type": "block", "position": { "x": 10, "y": 660, "w": 150, "h": 130 } } ], "globalInputs": [ "input_global_trp" ] }, "title": "my view", "defaults": { "dataSources": { "ds.search": { "options": { "queryParameters": { "latest": "$global_time.latest$", "earliest": "$global_time.earliest$" } } } } } } Thanks 
Our Splunk indexer is under-resourced. To match Splunk Support's recommendations, we need to add more RAM to it. We have a deployment server with 2 indexers and 2 search heads. This upgrade will require about 30 minutes of downtime. What's the best approach for the hardware upgrade?
I have two separate search queries that work on their own, but when I try to combine them with a join, I get no results from the second query.

First query:

index=ads sourcetype="sequel"
| eval jobname="Job for p1"
| rex field=_raw "schema:(?P<db>[^ ]+)"
| rex field=_raw "table:(?P<tb>[^ ]+)"
| rex field=_raw "s_total_count:(?P<cnts>[^ ]+)"
| rex field=_raw "origin_cnt_date:(?P<dte>[\D]+[\d]+[ ][\d]+[:]+[\d]+[:]+[\d]+[ ][\D]+[\d]+)"
| eval date= strptime(dte, "%a %B %d %H:%M:%S")
| eval dates=strftime(date, "%Y-%m-%d")
| fields db tb cnts dates jobname
| where cnts>0
| table dates jobname db tb cnts

Second query:

index=ads sourcetype="isosequel"
| rex field=_raw "schema:(?P<db>[^ ]+)"
| rex field=_raw "table:(?P<tb>[^ ]+)"
| rex field=_raw "count:(?P<cnt>[^ ]+)"
| eval jobname1="Job for p2"
| stats sum(cnt) as tb_cnt by jobname1 db tb
| fields jobname1 db tb tb_cnt
| table jobname1 db tb tb_cnt

Joined query (not working as expected):

index=ads sourcetype="sequel"
| eval jobname="Job for p1"
| rex field=_raw "schema:(?P<db>[^ ]+)"
| rex field=_raw "table:(?P<tb>[^ ]+)"
| rex field=_raw "s_total_count:(?P<cnts>[^ ]+)"
| rex field=_raw "origin_cnt_date:(?P<dte>[\D]+[\d]+[ ][\d]+[:]+[\d]+[:]+[\d]+[ ][\D]+[\d]+)"
| eval date= strptime(dte, "%a %B %d %H:%M:%S")
| eval dates=strftime(date, "%Y-%m-%d")
| fields db, tb, cnts, dates, jobname
| join type=inner db tb
    [ search(index=ads sourcetype="isosequel")
    | rex field=_raw "schema:(?P<db>[^ ]+)"
    | rex field=_raw "table:(?P<tb>[^ ]+)"
    | rex field=_raw "count:(?P<cnt>[^ ]+)"
    | rex field=_raw "jobname:Job for (?P<jb>[a-z_A-Z0-9]+)"
    | stats sum(cnt) as tb_cnt by jb db tb
    | fields db, tb, tb_cnt, jb]
| eval diff = cnts-tb_cnt
| table dates, jobname, jb, db, tb, cnts, tb_cnt, diff

Requirement: I want to compare each db and table from the first query with the db and table from the second query and get the difference, but I am not getting any result from the second query. Any help would be appreciated. Thank you in advance!
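A hedged alternative sketch that avoids join (and its subsearch limits) by appending the second dataset and letting a final stats line the two up on db and tb; field names are taken from the queries above. It also sidesteps one common cause of an empty inner join, namely db/tb values that don't match exactly between the two sourcetypes (case, trailing characters, and so on), since values() simply leaves the unmatched side blank and makes the mismatch visible.

index=ads sourcetype="sequel"
| rex field=_raw "schema:(?P<db>[^ ]+)"
| rex field=_raw "table:(?P<tb>[^ ]+)"
| rex field=_raw "s_total_count:(?P<cnts>[^ ]+)"
| fields db tb cnts
| append
    [ search index=ads sourcetype="isosequel"
    | rex field=_raw "schema:(?P<db>[^ ]+)"
    | rex field=_raw "table:(?P<tb>[^ ]+)"
    | rex field=_raw "count:(?P<cnt>[^ ]+)"
    | stats sum(cnt) as tb_cnt by db tb ]
| stats values(cnts) as cnts values(tb_cnt) as tb_cnt by db tb
| eval diff = cnts - tb_cnt
| table db tb cnts tb_cnt diff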
Hi everyone. Apologies if this answer is on the forum somewhere. We are trying to pass a field value into an alert title, which the PagerDuty integration then uses as the title of the PagerDuty incident. We have tried $result.field_name$ and $field_name$, with no joy. $result.field_name$ works fine when used in the custom details section of the integration. This is the PagerDuty guide if anyone needs it for reference: https://www.pagerduty.com/docs/guides/splunk-integration-guide/ Really appreciate any help. Thanks, Sam
I want to send an alert when transactions with a response time > 10 sec make up more than 2% of the total transactions in an hour. Could you please suggest a suitable query to achieve this requirement?
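A hedged sketch, assuming a numeric field called response_time in seconds (the index, sourcetype, and field names are placeholders); schedule it hourly with the alert trigger condition set to "number of results > 0":

index=your_index sourcetype=your_sourcetype earliest=-1h@h latest=@h
| stats count as total count(eval(response_time > 10)) as slow
| eval slow_pct = round(100 * slow / total, 2)
| where slow_pct > 2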
Hi guys, we are using the "Microsoft Azure App for Splunk", fed by inputs from the "Splunk Add-on for Microsoft Cloud Services" and the "Microsoft Azure Add-on for Splunk". We have created the inputs for audit logs and Security Center logs, and we are getting the data in the "Microsoft Azure App for Splunk" dashboards as expected. The one problem is that we cannot see or select anything in the "Subscription" dropdown menu, even though we are seeing data from multiple subscriptions on the dashboards. The "Subscription" dropdown shows the message "Search produced no result". Any idea why this is, or is there a bug in that app? Please suggest.
Hi, I am getting the response below from my Splunk query; please refer to the screenshot. As you can see in the screenshot, the result is 92.20%. My requirement is to send an alert whenever the percentage is below 98.00%, over a time frame of 1 hour. Could you please suggest a suitable query to trigger an alert whenever the result of the query is below 98.00%?
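A hedged sketch, assuming the existing search ends with a field named percentage (rename to match the actual field, which isn't shown in the post); schedule the alert over the last hour and trigger when the number of results is greater than zero:

  <existing search that produces the percentage field>
| eval pct = tonumber(replace(percentage, "%", ""))
| where pct < 98.00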