All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.
There is a bug on the Custom Dashboards page with the functionality of the "Show My Dashboards Only" check box. The option either does not display the related dashboards or errors out. In earlier versions of the Controller, enabling this option applied a filter that showed only the dashboards related to the currently logged-in user. We have worked with AppD support to point out the error, and support did acknowledge it, but the error persists through the subsequent Controller upgrades. AppD Support ticket Request #335757, created back on August 11th.

Pre-Prod Controller: AppDynamics Controller build 22.8.1-715
Prod Controller: AppDynamics Controller build 22.9.1-1134

Pre-prod: Before filter, dashboards for "dietrich"
Pre-prod: After filter
Prod: Before filter, dashboards for "dietrich"
Prod: After filter
Hello All, I have email exchange transactional data with the fields below. I am looking at the data with a span of one day, e.g. how many emails were sent by each user with an attachment vs. without an attachment.

Fields: message_id, email_id, attachment_count, recipient_name
Sample row: abc, nameA, 0, xyz

Expected result: date (like dd/mm/yy), email_id, HasAttachmentCount, NoAttachmentCount
e.g. 1/1/2022, nameA, 4, 3

I am able to write chart (over email_id by isattachment) and get data for the selected duration, but I am unable to split the data out day by day.
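A minimal sketch of one way to get the day-by-day split (assuming attachment_count is 0 when there is no attachment; the index and sourcetype names are placeholders):

```spl
index=email sourcetype=exchange
| bin _time span=1d
| stats count(eval(attachment_count > 0)) as HasAttachmentCount
        count(eval(attachment_count = 0)) as NoAttachmentCount
        by _time email_id
| eval date = strftime(_time, "%d/%m/%y")
| table date email_id HasAttachmentCount NoAttachmentCount
```

The key step is bin _time span=1d before the stats, which lets _time participate in the by clause one row per day per email_id.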
Hi everyone, I am searching data in Splunk, and after several steps I now have this table:

_time  count  Type
Mon Sep 12 00:00:00 2022  820  1
Mon Sep 12 00:00:00 2022  885  2
Tue Sep 13 00:00:00 2022  773  1
Tue Sep 13 00:00:00 2022  922  2
Wed Sep 14 00:00:00 2022  825  1
Wed Sep 14 00:00:00 2022  844  2
Thu Sep 15 00:00:00 2022  748  1
Thu Sep 15 00:00:00 2022  943  2
Fri Sep 16 00:00:00 2022  794  1
Fri Sep 16 00:00:00 2022  890  2
Sat Sep 17 00:00:00 2022  684  1
Sat Sep 17 00:00:00 2022  793  2
Sun Sep 18 00:00:00 2022  737  1
Sun Sep 18 00:00:00 2022  795  2
Mon Sep 19 00:00:00 2022  764  1
Mon Sep 19 00:00:00 2022  890  2
Tue Sep 20 00:00:00 2022  792  1
Tue Sep 20 00:00:00 2022  876  2
Wed Sep 21 00:00:00 2022  754  1
Wed Sep 21 00:00:00 2022  853  2
Thu Sep 22 00:00:00 2022  784  1
Thu Sep 22 00:00:00 2022  883  2
Fri Sep 23 00:00:00 2022  731  1
Fri Sep 23 00:00:00 2022  820  2
Sat Sep 24 00:00:00 2022  691  1
Sat Sep 24 00:00:00 2022  788  2
Sun Sep 25 00:00:00 2022  726  1
Sun Sep 25 00:00:00 2022  762  2
Mon Sep 26 00:00:00 2022  403  1
Mon Sep 26 00:00:00 2022  431  2

Actually there are more than 2 types, but I have only put 2 here to simplify. For now I can view the trend of the data for each type thanks to Trellis, 7 days per week. But I want another view that displays the data by each type, comparing the same day across different weeks. Something like this: [screenshot] Do you have any idea please? Thanks, Julia
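One common way to compare the same weekday across weeks is timewrap. A sketch, assuming the table above comes from a timechart-style search and that you filter to one Type at a time (timewrap overlays one series per time period):

```spl
... base search filtered to a single Type ...
| timechart span=1d sum(count) as count
| timewrap 1week
```

This produces one column per week, so Mondays line up against Mondays, Tuesdays against Tuesdays, and so on, which suits a week-over-week comparison chart.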
Hey Splunkers!! I'm exploring summary indexing. I wanted to know how timestamps are extracted for each event when importing data into a summary index in Splunk Enterprise 9.0.1. I checked the documentation but couldn't find anything on this. And regarding the _time value of the event being summarized: will the _time value of the event still be extracted even if _time is not included in the search results?
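For reference, a sketch of the usual pattern for keeping a meaningful _time in summary data: bin the time and keep it in the statistics, so each summary event carries its own timestamp (index and field names here are placeholders):

```spl
index=web sourcetype=access_combined
| bin _time span=1h
| stats count by _time host
| collect index=my_summary
```

When the results handed to collect contain a _time field, that value becomes the summary event's timestamp. If _time is absent from the results, Splunk falls back to a default, so it is safest to carry _time through explicitly as above; the collect documentation describes the exact fallback behavior.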
Hi, if I had logs such as:

"Client authentication successful PAN-OS ver: 9.1.11-h3 Panorama ver:10.1.6-h3 Client IP: 10.68.196.211 Server IP: 10.58.217.123 Client CN: 013101004861"
"Client authentication successful PAN-OS ver: 9.1.11 Panorama ver:10.1.6-h6 Client IP: 10.58.90.53 Server IP: 10.58.90.200 Client CN: 010401005346",

how can I extract BOTH the PAN-OS and Panorama ver, i.e. 9.1.11, 10.1.6-h6, 10.1.6-h3, 9.1.11-h3?

I tried the following but it doesn't work:
| rex field=body "[Panorama][PAN-OS]\s*:(?<Software_Version>.+?) Client"
Can you please help?
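One issue with the attempt above is that [Panorama] and [PAN-OS] are character classes in regex, not literal alternatives. A sketch that captures each version into its own field, assuming each version string runs up to the next whitespace as in the samples:

```spl
| rex field=body "PAN-OS ver:\s*(?<panos_ver>\S+)\s+Panorama ver:\s*(?<panorama_ver>\S+)"
```

Against the two sample events this yields panos_ver=9.1.11-h3 / panorama_ver=10.1.6-h3 and panos_ver=9.1.11 / panorama_ver=10.1.6-h6.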
Hello All, this is with reference to log ingestion from an IIS server. I have a universal forwarder installed on the IIS server and it is getting Windows logs; now I want to ingest IIS logs. I have downloaded https://splunkbase.splunk.com/app/3185 and installed it on the search head, referencing https://docs.splunk.com/Documentation/AddOns/released/MSIIS/Setupaddon, but it is showing an invalid directory and I am stuck. My question is: where all do I have to install the add-on? Only on the search head, since that was what was mentioned in the Splunk documentation?
Hello everyone! I have the following search:

index="xyz" "restart" | eval _time = strftime(_time,"%F %H:%M:%S") | stats count as "count_of_starts" values(_time) as "restart_time" by host

Now I get a table with "host", "count_of_starts" and "restart_time", but the times inside the values are ordered like:

2022-09-22 12:19:22
2022-09-22 12:19:46
2022-09-22 15:02:12
2022-09-22 15:02:36
2022-09-23 11:00:51
2022-09-23 11:01:16
2022-09-23 15:18:10
2022-09-23 15:18:34
2022-09-23 15:35:47
2022-09-23 15:36:11
2022-09-23 16:15:05
2022-09-23 16:15:30
2022-09-24 09:47:43
2022-09-24 09:48:06

I need these results in the opposite order; how can I implement this? | sort - _time before or after stats didn't work, and | sort restart_time also didn't affect the results. Thank you all in advance! Kind regards, Ben
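The values() function always returns its results sorted in ascending lexicographic order, which is why sorting before or after the stats has no effect. list() preserves the order in which events arrive, so sorting first and switching to list() should give the reversed order. A sketch (note that list() also keeps duplicates, unlike values()):

```spl
index="xyz" "restart"
| sort 0 - _time
| eval _time = strftime(_time,"%F %H:%M:%S")
| stats count as "count_of_starts" list(_time) as "restart_time" by host
```

The 0 in sort 0 removes the default 10,000-result limit so no restarts are dropped before the stats.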
When I was learning Splunk I encountered the following question: analyze the following SPL query

* | outputlookup mydummy.csv

If no events were generated for the query, choose the appropriate option. Available answers:
a. at least one event needs to be created to add results to mydummy
b. mydummy.csv will be created but the file will be empty
c. outputlookup cannot be used for queries which generate 0 results
d. mydummy.csv should be created before events can be added
e. mydummy.csv will be created

I'm doubting between C or D, or am I totally wrong?
How to extract data from the log message using rex field=_raw? Sample data:

Instance Name : ABCDEFGH1
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=ampxwdp1o.pharma.aventis.com)(PORT=12345)))
Alias ABCDEFGH1
Uptime 4 days 6 hr. 39 min. 25 sec
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=113.09.126.234)(PORT=12345)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC12345)))
The command completed successfully
Instance Name : ABCDEFGH1TEMP
Instance Name : ABCDEFGQ1

I need to extract Instance Name, Alias and Uptime.
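A sketch, assuming each field sits on its own line within the event; max_match=0 makes instance_name multivalued, since the sample contains several Instance Name lines:

```spl
| rex field=_raw max_match=0 "Instance Name\s*:\s*(?<instance_name>\S+)"
| rex field=_raw "Alias\s+(?<alias>\S+)"
| rex field=_raw "Uptime\s+(?<uptime>.+)"
```

The .+ in the Uptime pattern stops at the end of the line, so it captures "4 days 6 hr. 39 min. 25 sec" without running into the next line.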
In the Monitoring Console on our Master, we have [a queue fill ratio panel], and when I open it in search I get this query:

| rest splunk_server=test43 /services/server/introspection/queues
| eval current_fill_perc = round(current_size_bytes / max_size_bytes * 100, 0)
| fields title, current_fill_perc
| search title="parsingQueue.*" OR title="aggQueue.*" OR title="typingQueue.*" OR title="indexQueue.*"
| rex field=title "(?<queue_name>^\w+)\.(?<pipeline_number>\d+)"
| chart values(current_fill_perc) over pipeline_number by queue_name
| fields pipeline_number, parsingQueue, aggQueue, typingQueue, indexQueue
| rename pipeline_number as "Pipeline Number", parsingQueue as "Parsing Queue Fill Ratio (%)", aggQueue as "Aggregator Queue Fill Ratio (%)", typingQueue as "Typing Queue Fill Ratio (%)", indexQueue as "Index Queue Fill Ratio (%)"

I want to create a dashboard for some members who don't have access to our Master server, but when I run that search on our Search Head it produces no results. Is there another way to show it on our SH?
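The | rest command can only query servers that the search head knows as distributed search peers, which is likely why it returns nothing there. One possible workaround (a sketch, not the only option): schedule the working search on the instance where it succeeds and write the results to a summary index, then build the search head dashboard on that index. The index name queue_health is a placeholder, and this assumes the summary index is searchable from the search head:

```spl
| rest splunk_server=test43 /services/server/introspection/queues
| eval current_fill_perc = round(current_size_bytes / max_size_bytes * 100, 0)
| search title="parsingQueue.*" OR title="aggQueue.*" OR title="typingQueue.*" OR title="indexQueue.*"
| rex field=title "(?<queue_name>^\w+)\.(?<pipeline_number>\d+)"
| collect index=queue_health
```

The dashboard search on the search head would then read index=queue_health and apply the chart/rename steps there.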
What would be the regular expression when using rex to match fields that end with a range of values?
Sample: "var0":0,"var1":10,"var2":20,"var10":100
I would like to extract the fields from var1 to var10 and exclude var0. Thanks
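A sketch, assuming the values are plain integers as in the sample; var[1-9]\d* matches var1 through var10 (and higher) but not var0, because the first digit after "var" must be non-zero:

```spl
| rex max_match=0 "\"(?<varname>var[1-9]\d*)\":(?<varvalue>\d+)"
```

With max_match=0 the fields come back multivalued; mvzip on varname and varvalue can pair them up if you need one row per variable.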
As the title says, I am looking to setup retrospective searches based on new threat intelligence indicators in ES. Is it possible ? if yes, can someone please suggest the best way to do this ?
Hi, I have installed the IBM IIB agent (https://docs.appdynamics.com/appd/4.5.x/en/application-monitoring/install-app-server-agents/ibm-integration-bus-agent/install-the-iib-agent) and all appears to be working OK. When I start the IIB application, however, I get the error "Agent license request denied. Agent type: WMB/IIB". I see nothing on the dashboard either. I am on the 16-day SaaS trial. Is the IIB agent not available to be used in the SaaS trial? Thanks,
I've been reading about the differences between forward indexes and inverted indexes.  Which model does Splunk use?  I have not been able to find that information in the documentation.
Hi, I am using the following rex command to extract all text between "...device-group:" and "succeeded..." from a field called "old", assigning the extracted values to a new field called "new":

| rex field=old "device-group:\s*(?<new>\S+)"

Currently it extracts the text between "...device-group:" and "succeeded..." EXCEPT in cases where there are multiple words with spaces. Examples:
1) "Panorama push to device:013101009509 for device-group: Austin Cloud DMZ succeeded. JobId=2484595", where the extracted value should be "Austin Cloud DMZ"
2) "Panorama push to device:013101014290 for device-group: Austin Bank Segmentation succeeded. JobId=2482583", where the extracted value should be "Austin Bank Segmentation"
Can you please help with extracting such cases too? Thank you!
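\S+ stops at the first space, which is why multi-word group names get cut off. A non-greedy capture anchored on the trailing "succeeded" handles them (a sketch, assuming "succeeded" always follows the group name as in both examples):

```spl
| rex field=old "device-group:\s*(?<new>.+?)\s+succeeded"
```

Against example 1 this captures "Austin Cloud DMZ", and against example 2 it captures "Austin Bank Segmentation", without the trailing space.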
I have a csv file which contains the lines below.

METRIC_NAME,METRIC_UNIT,BEGIN_TIME,END_TIME,MAXVAL,MINVAL,AVERAGE
Buffer Cache Hit Ratio,% (LogRead - PhyRead)/LogRead,09/25/2022 14:59,09/25/2022 15:59,100,0,100
Memory Sorts Ratio,% MemSort/(MemSort + DiskSort),09/25/2022 14:59,09/25/2022 15:59,100,0,100
Redo Allocation Hit Ratio,% (#Redo - RedoSpaceReq)/#Redo,09/25/2022 14:59,09/25/2022 15:59,100,0,100
User Transaction Per Sec,Transactions Per Second,09/25/2022 14:59,09/25/2022 15:59,1.383,0,.528
Physical Reads Per Sec,Reads Per Second,09/25/2022 14:59,09/25/2022 15:59,1.05,0,.138
Physical Reads Per Txn,Reads Per Txn,09/25/2022 14:59,09/25/2022 15:59,2.296,0,.223
...about 160 lines in total.

I have modified the header, inputs.conf and props.conf, but the error is still seen.

inputs.conf:
[monitor:///u01/app/oracle/scripts/Performance_metrics/output]
disabled = false
index = brm_db
initCrcLength = 2048
sourcetype = csv
crcSalt = <SOURCE>

props.conf:
[csv]
SHOULD_LINEMERGE = false
pulldown_type = true
INDEXED_EXTRACTIONS = csv
KV_MODE = none
category = custom
HEADER_FIELD_ACCEPTABLE_SPECIAL_CHARACTERS = _
HEADER_FIELD_DELIMITER = ,
FIELD_DELIMITER = ,
description = Comma-separated value format. Set header and other settings in "Delimited Settings"

Can someone help with this? I need to complete it ASAP; I have been stuck on this for almost 2 days. Quick help would be appreciated.
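One thing worth checking: INDEXED_EXTRACTIONS is applied at the forwarder, so this props.conf stanza must be on the universal forwarder that monitors the file, not only on the indexer or search head. A sketch with an explicit timestamp configuration, assuming BEGIN_TIME should drive _time (the TIME_FORMAT matches the 09/25/2022 14:59 style in the sample):

```ini
# props.conf on the universal forwarder monitoring the file
[csv]
INDEXED_EXTRACTIONS = csv
SHOULD_LINEMERGE = false
FIELD_DELIMITER = ,
HEADER_FIELD_DELIMITER = ,
TIMESTAMP_FIELDS = BEGIN_TIME
TIME_FORMAT = %m/%d/%Y %H:%M
```

Without TIMESTAMP_FIELDS, Splunk has to guess a timestamp from the row content, which can itself cause ingestion warnings with data like this.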
Hello, I found a new failure on an app's setup page (secret storage) on Splunk Cloud when I tried to register a password in secret storage:

[SPLUNKD] You (user=<my splunk user>) do not have permission to perform this operation (requires capability: $db_connect_read_app_conf$).

Could this be just an account permissions issue? Could it be worked around by using an admin account? Do you have any information on this db_connect_read_app_conf Splunk capability? Unfortunately, my searches turned up no useful documentation or references for this capability. I'm happy for any information from you. Thank you in advance.
Hi, I have rows that are JSON based. Each row has a field that looks like this:

{
  "students" : [
    {"id":"123", "name":"abc"},
    {"id":"456", "name":"def"},
    {"id":"789", "name":"hij"}
  ],
  "student_id" : "456"
}

Each row can have multiple students and always just one student_id. For each row I want to extract the name of the student whose id is equal to the student_id. How can I do that?

I tried this:
|spath path=students{} output=students|mvexpand students | spath input=students|multikv| table id, name, student_id

and I do get 3 rows like this in the result:

id name student_id
123 abc 456
456 def 456
789 hij 456

But when I try to filter the matching row with | where id = student_id, I get 0 rows back.

TIA, Asaf
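A sketch of one way this can work: multikv is meant for tabular text and may be interfering here, so it can be dropped, and extracting student_id explicitly before the comparison makes sure both fields exist on every row (field names taken from the sample above):

```spl
| spath student_id
| spath path=students{} output=student
| mvexpand student
| spath input=student
| where id == student_id
| table id name student_id
```

Since both id and student_id are extracted as strings from the JSON, the string comparison should match the "456" row directly.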
New install and new to Splunk. SC4S issue. Professional Services left earlier in the week. Firewall and Panorama log sources; only the firewall logs are going to the right index as specified in splunk_metadata.csv. I can see the Panorama logs in tcpdump, but they aren't in the firewall index and aren't in lastchanceindex.

pan_panos_log,index,firewall
pan_panos_globalprotect,index,firewall
pan_panos_traffic,index,firewall
pan_panos_threat,index,firewall
pan_panos_system,index,firewall
pan_panos_config,index,firewall
pan_panos_hipmatch,index,firewall
pan_panos_correlation,index,firewall
I have created a custom command, | cloudcidrlookup cloud=azure, but how do I change it so it can be called as just | cloudcidrlookup azure?

@Configuration()
class CloudCidrLookup(GeneratingCommand):
    cloud = Option(require=False)
    ...
    def generate(self):
        if self.cloud == 'azure':
            ...

dispatch(CloudCidrLookup, sys.argv, sys.stdin, sys.stdout, __name__)
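If I recall the splunklib search-command protocol correctly, bare positional arguments like azure arrive in self.fieldnames rather than as options, so generate() can fall back to them. A minimal, self-contained sketch of just that fallback logic; resolve_cloud is a hypothetical helper, and in the real command you would call it as resolve_cloud(self.cloud, self.fieldnames):

```python
def resolve_cloud(option_value, fieldnames):
    """Prefer an explicit cloud=<name> option; otherwise take the
    first positional argument, if any."""
    if option_value:
        return option_value
    return fieldnames[0] if fieldnames else None

# "| cloudcidrlookup cloud=azure" -> option set, no positional args
print(resolve_cloud("azure", []))      # azure
# "| cloudcidrlookup azure"      -> option unset, positional arg present
print(resolve_cloud(None, ["azure"]))  # azure
```

With this in place, generate() can branch on the resolved value instead of self.cloud directly, and both calling styles behave the same.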