All Topics


Hello, I am trying to create a saved search that runs every day and exports the results to a CSV file. What is a one-line curl command that would let me do this?
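A minimal sketch against the REST export endpoint (the host, admin:changeme credentials, and the saved-search name my_daily_report are all placeholders; -k skips TLS verification):

curl -k -u admin:changeme https://splunk.example.com:8089/services/search/jobs/export --data-urlencode search='| savedsearch "my_daily_report"' -d output_mode=csv > daily_report.csv

Running it every day is then just a matter of a cron entry on the calling host.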
I have set up a new Splunk test environment with a search head cluster (3 SHs) and an indexer cluster (2 IDXs). I also added Splunk_SA_CIM, first in version 4.18 and, in my latest test, version 4.20.2. Splunk is working fine and accelerated data models are working, which means they are searchable. After installing the Sophos Central app https://splunkbase.splunk.com/app/6186/ I am no longer able to search in my data model: | datamodel Authentication search. Even simpler: searching with a tag is not working; index=* tag=authentication gives the same error. The same setup was tested on a standalone Splunk instance without problems. Any ideas?
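A hedged troubleshooting sketch, assuming shell access to one of the search head members: btool shows which app contributes each tag and eventtype definition, which can reveal a Sophos Central knowledge object shadowing the CIM ones:

$SPLUNK_HOME/bin/splunk btool tags list --debug | grep -i authentication
$SPLUNK_HOME/bin/splunk btool eventtypes list --debug | grep -i sophos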
After installing the Microsoft Windows add-on, I cannot see the applicable tags for the Network Resolution data model with respect to DNS logs. Why can I not see any tags? Any thoughts?
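One hedged way to narrow this down (the index and sourcetype names are assumptions; substitute your own): tags are applied through eventtypes, so if no eventtype matches your DNS events, no tags will appear either:

index=your_dns_index sourcetype=your_dns_sourcetype | head 100 | stats count by eventtype, tag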
Hi. I have a developer who works on a local app and maintains the source in Azure DevOps. He is, of course, interested in automating the complete deployment process to our on-prem search head. Has anyone tried this setup and found some good practices? For some reason I'm not comfortable with having a DevOps agent running on the search head, where it could potentially have access to the filesystem etc. Any advice is greatly appreciated. Kind regards
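A hedged sketch of one pipeline step that keeps the agent off the search head (hostnames, paths, and credentials are placeholders): package the app on the build agent, copy it over SSH, and install it with the Splunk CLI:

tar -czf myapp.tgz myapp/
scp myapp.tgz deploy@searchhead.example.com:/tmp/
ssh deploy@searchhead.example.com '/opt/splunk/bin/splunk install app /tmp/myapp.tgz -update 1 -auth admin:changeme'

A dedicated deploy account with narrowly scoped credentials would limit the filesystem-access concern to a single SSH user.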
The search you ran returned a number of fields that exceeded the current indexed field extraction limit='200'. To ensure that all fields are extracted for search, set limits.conf: [kv] / indexed_kv_limit to a number that is higher than the number of fields contained in the files that you index.
Hi, I am getting the above error, while on the left side I have only 35-10 fields extracted at search time. The log is ingested over Splunk HEC using Splunk_TA_nix with the linux_secure stanza. How can I detect what is causing the above error? I didn't find anything that would create indexed fields, and I didn't see the fields on the left. How do I troubleshoot this? With a search like this, I got 11 fields: | walklex index="<index_name>" type=field | search NOT field=" *" | stats list(distinct_values) by field
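For reference, the change the error message itself suggests would look like this in limits.conf (the value 500 is an arbitrary example; pick something above your real field count):

[kv]
indexed_kv_limit = 500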
I'm new to Splunk. I created 15 users and had failed login attempts on some of them. How can I find the first 10 failed login attempts, and with what command can I see this in Splunk? I tried sourcetype="WinEventLog:Security" eventcode 4625 | top limit=10 "Account Name" but it brought back all users. How do I integrate the 'failed' part into it? Am I walking down the wrong path?
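A hedged sketch (field names follow the usual Windows TA conventions and may differ in your environment): EventCode=4625 is the failed-logon event, and sorting ascending on _time before head returns the earliest ten:

sourcetype="WinEventLog:Security" EventCode=4625 | sort _time | head 10 | table _time, Account_Name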
Hi everyone. I am having some difficulty counting my consumedHostUnits. I have this command: index="dynatrace_hp" | search endpoint="infrastructure/hosts" | stats distinct_count(discoveredName) count(consumedHostUnits) by "managementZones{}.name" | search "managementZones{}.name"="[Env]*" But the results do not return the right information (I would like to have the total consumedHostUnits for all the hosts in a management zone). Thanks for your help!
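A hedged sketch of the likely fix (field names taken from the original search): count() only counts how many events carry the field, whereas sum() adds the values, which is what a total calls for:

index="dynatrace_hp" endpoint="infrastructure/hosts" | stats dc(discoveredName) AS host_count, sum(consumedHostUnits) AS total_host_units BY "managementZones{}.name" | search "managementZones{}.name"="[Env]*"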
I have data in a source that shows Y/N for the fields investor, borrower, guarantor, and benefic for each customer. I need to show a pie chart with the counts of investor, borrower, guarantor, and benefic that are Y, based on the customer ID. For example, if a customer has only investor = Y, then the investor count will be 1; if a customer has all the fields = Y, then each of the counts will be 1. Below is the data snippet: index="customerdata" | table CUS_CID_CUST_ID, CUS_IND_INVESTOR, CUS_IND_BORROWER, CUS_IND_GUARANTOR, CUS_IND_BENEFIC The user wants to see it like this:
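A hedged sketch, assuming one row per customer after deduplicating on the customer ID: count each flag where it equals Y, then transpose so the four counts become category/value rows a pie chart can consume:

index="customerdata" | dedup CUS_CID_CUST_ID | stats count(eval(CUS_IND_INVESTOR="Y")) AS Investor, count(eval(CUS_IND_BORROWER="Y")) AS Borrower, count(eval(CUS_IND_GUARANTOR="Y")) AS Guarantor, count(eval(CUS_IND_BENEFIC="Y")) AS Benefic | transpose column_name="role"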
Hi there. I was wondering... All the docs and howtos regarding index-time extractions say that you need to set the field to indexed in fields.conf. Fair enough - you want your field indexed, you make it an indexed field. That's understandable. But what really happens if I do an index-time extraction (and/or an ingest-time eval) producing a new field but I don't set it as an indexed one? Does the indexer/HF simply parse the field, use it internally in the parsing queue, but not pass it downstream into indexing? Or is it passed down into indexing but ignored there? Or is it sent to indexing and indexed, but if the search head(s) don't know it's indexed, it's not used in search? Or any other option?
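For concreteness, a minimal sketch of the documented trio (stanza, transform class, and field names are hypothetical):

# props.conf
[my_sourcetype]
TRANSFORMS-add_foo = add_foo

# transforms.conf
[add_foo]
REGEX = foo=(\w+)
FORMAT = foo::$1
WRITE_META = true

# fields.conf
[foo]
INDEXED = true

My hedged understanding: WRITE_META = true is what actually writes foo::value into the index, independent of fields.conf; fields.conf INDEXED = true only tells the search side to resolve foo=bar against the indexed term foo::bar instead of treating it as a search-time extraction.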
I have this query where I need to use stats to aggregate the results by account_number. Now, some of the results are multivalued. I need the output to be like a table, one value per row - somehow expanding the aggregated results into a tabular format.
=========================================
`index_list` account_type="Service Account"
| addinfo
| eventstats dc(sourcetype) as dc_sourcetype by service_number
| where dc_sourcetype>1
| stats values(is_interactive) as is_interactive, values(account_name) as account_name, values(full_name) as full_name, values(email_address) as email_address, values(manager_name) as manager_name, values(service_account_name) as service_account_name, values(account_type) as account_type, values(service_account_id) as service_account_id, values(au_owner_name) as au_owner_name, values(au_owner_email) as au_owner_email BY account_number
=======================================
account_type, service_account_id, and account_number are multivalue fields.
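A hedged sketch for the expansion step, assuming the multivalue fields stay positionally aligned: mvzip pairs them up, mvexpand emits one row per pair, and split/mvindex recovers the columns (the "|" delimiter is arbitrary):

... | eval zipped=mvzip(account_type, service_account_id, "|") | mvexpand zipped | eval account_type=mvindex(split(zipped, "|"), 0), service_account_id=mvindex(split(zipped, "|"), 1) | fields - zipped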
I want to display the maximum percentage and the mount point, but I do not know the command, because the file is not a CSV. It is a txt file, and I use multikv to extract the fields.
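A hedged sketch, assuming df-style output in which multikv yields columns named something like UsePct and Mounted_on (both hypothetical; check the field names multikv actually extracts): strip the percent sign, convert to a number, and keep the top row:

source="your_file.txt" | multikv | eval use_pct=tonumber(replace(UsePct, "%", "")) | sort - use_pct | head 1 | table Mounted_on, use_pct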
Hi, what are the 4 important attributes to be considered under distsearch.conf?
Hi, I am providing sample data below:
[2021-12-07 03:50:14,666] {{taskinstance.py:1532}} INFO - Marking task as FAILED. dag_id=any_bash_command_dag, task_id=bash_command, execution_date=20211207T035010, start_date=20211207T035013, end_date=20211207T035014
[2021-12-08 01:02:14,491] {{taskinstance.py:1192}} INFO - Marking task as SUCCESS. dag_id=Parent_dag, task_id=trigger_archive_files_dag, execution_date=20211207T000000, start_date=20211208T010213, end_date=20211208T010214
SPL:
index=cloud sourcetype=lambda:Airflow2Splunk "\"logGroup\"" "\"airflow-OnePIAirflowEnvironment-DEV-Task\"" "Marking task as*" dag_id=* | rex field=_raw "task_id=(?P<task_id>\w+)" | table _time dag_id task_id | sort _time
Current results in tabular form:
_time                    dag_id                task_id                    Task_Status
2021-12-06 22:50:14.756  any_bash_command_dag  bash_command
2021-12-07 20:02:14.626  Parent_dag            trigger_archive_files_dag
Expected results in tabular form:
_time                    dag_id                task_id                    Task_Status
2021-12-06 22:50:14.756  any_bash_command_dag  bash_command               Failed
2021-12-07 20:02:14.626  Parent_dag            trigger_archive_files_dag  Success
Can you please help me modify the SPL above so that it adds a column "Task_Status" with the value "Failed" for dag_id=any_bash_command_dag and "Success" for dag_id=Parent_dag? Thanks, Sumit
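A hedged sketch grounded in the sample events: the status word already follows "Marking task as", so a second rex can capture it, and an eval maps it to the expected casing:

index=cloud sourcetype=lambda:Airflow2Splunk "\"logGroup\"" "\"airflow-OnePIAirflowEnvironment-DEV-Task\"" "Marking task as*" dag_id=* | rex field=_raw "task_id=(?P<task_id>\w+)" | rex field=_raw "Marking task as (?P<Task_Status>\w+)" | eval Task_Status=case(Task_Status="FAILED", "Failed", Task_Status="SUCCESS", "Success", true(), Task_Status) | table _time dag_id task_id Task_Status | sort _time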
Hello, we have a use case where the JSON payloads are more than 1 million bytes. Our current truncation limit is set to the default of 10000 bytes. We don't have the option to set this to 0: our environment is Splunk Cloud, and Splunk support didn't agree to set it to 0. Is there an alternative way to overcome this situation? Please let me know if anyone has faced this situation before. Thanks
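One hedged alternative (the sourcetype name and value are placeholders): instead of disabling truncation entirely with 0, request a large but finite TRUNCATE in props.conf scoped to the one sourcetype, which support may be more willing to accept:

[my_large_json_sourcetype]
TRUNCATE = 2000000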
How can I get back an administrator role and resolve the above error?
I've created a custom dashboard for one of our clients, and each of the entities that I'm reporting on in a pie chart displays as: {className}.{methodName} Since the class name is the same for each of the displayed entities, the client wants to know if it's possible to trim off className for each of these entities, as it isn't very aesthetically pleasing. I'm currently displaying the entity using ${e} under the Advanced options for Metric Display names, and it appears that it won't evaluate regular expressions. Do I have any options to trim this display name, or am I doomed? Any advice you can provide will be extremely helpful!
Hi everyone. Recently I tried to install the OCI add-on in a test environment, but it does not work. Following the docs posted at https://github.com/oracle-quickstart/oci-arch-logging-splunk I contacted Oracle Support; they sent a link to the add-on, and I followed the steps specified at the previous link. Unfortunately, I cannot launch the add-on because the OCI add-on is not visible (I tried to change the add-on to visible, but when I launch it, a page with a horse saying "oops" appears). Has anyone managed to integrate OCI with Splunk?
In the App Inspect report, the following error comes up for my application:
check_reload_trigger_for_all_custom_confs
Custom conf file inputs_templates.conf does not have a reload trigger in app.conf. Without a reload trigger the app will request a restart on any change to the conf file, which may be a negative experience for end-users.
My app does indeed have a .conf template file under default, named inputs_templates.conf. Under this same folder (/default), the following is the content of my app.conf:

[default]

[install]
is_configured = 0

[launcher]
author = DATAPLUS - Altin Karaulli
description = Omega Core Audit for Oracle App for Splunk is a security solution for Oracle databases.
version = 1.8.2

[package]
check_for_updates = 1
id = omega_core_audit

[ui]
label = Omega Core Audit for Oracle
is_visible = 1

[triggers]
reload.inputs_templates.conf = simple

Note the reload.inputs_templates.conf = simple under [triggers]. So why the failure by App Inspect?
Best regards, Altin
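One thing worth checking (my reading of the app.conf convention, so treat it as an assumption): the reload trigger key is normally the conf file name without the .conf extension, so the stanza would read:

[triggers]
reload.inputs_templates = simple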
I am trying to add a base search for a query and use the same search for a drop-down and a table. I see that after adding the search to the base search, the panel table shows no results. My expectation is that, based on one of the drop-down selections, my table should be updated with the selected value. Please help.

<form version="1.1">
  <label>RH_Release_Information</label>
  <description>FC-002 release info</description>
  <search id="base_search">
    <query>
      index="wtqlty" source=pdf_07 sourcetype="release_pdf_results_json"
      | table pdf_name, pdf_state, main_line, req_report, patch_name, patch_tag, started_on, Stream_start, Handover, planned_stopped_on, fco_state, snapshot, stakeholders.project_leader.name, stakeholders.developer.name, air_issues{}.short_description, Quality, description
      | rename pdf_name AS PDF, pdf_state AS "PDFState", fco_state AS FCOState, main_line AS "Mainline", patch_name AS Project, patch_tag AS Tags, started_on AS "PDF start", planned_stopped_on AS "Planned Stop", stakeholders.project_leader.name AS PL, stakeholders.developer.name AS Developer, air_issues{}.short_description AS Description, description AS Questionnaire
      | search PDFState = "$form.PDFState$" PL = "$form.PL$" FCOState = "$form.FCOState$"
    </query>
    <earliest>$time.earliest$</earliest>
    <latest>$time.latest$</latest>
    <sampleRatio>1</sampleRatio>
  </search>
  <fieldset submitButton="false" autoRun="true">
    <input type="time" token="time" searchWhenChanged="true">
      <label>Time</label>
      <default>
        <earliest>0</earliest>
        <latest></latest>
      </default>
    </input>
    <input type="dropdown" token="pdf_state_token">
      <label>PDFState</label>
      <choice value="*">All</choice>
      <fieldForLabel>PDFState</fieldForLabel>
      <fieldForValue>PDFState</fieldForValue>
      <search base="base_search">
        <query>| dedup PDFState | top PDFState</query>
      </search>
      <default>*</default>
    </input>
    <input type="dropdown" token="main_line">
      <label>Assembly</label>
      <choice value="*">All</choice>
      <default>*</default>
      <fieldForLabel>Mainline</fieldForLabel>
      <fieldForValue>Mainline</fieldForValue>
      <search base="base_search">
        <query>| dedup Mainline | top Mainline</query>
      </search>
    </input>
    <input type="dropdown" token="fco_state">
      <label>FCOState</label>
      <choice value="*">All</choice>
      <default>*</default>
      <fieldForLabel>FCOState</fieldForLabel>
      <fieldForValue>FCOState</fieldForValue>
      <search base="base_search">
        <query>| dedup FCOState | top FCOState</query>
      </search>
    </input>
    <input type="dropdown" token="snap_shot">
      <label>ServicePacks</label>
      <choice value="*">All</choice>
      <default>*</default>
      <fieldForLabel>snapshot</fieldForLabel>
      <fieldForValue>snapshot</fieldForValue>
      <search base="base_search">
        <query>| dedup snapshot | stats count by snapshot</query>
      </search>
    </input>
    <input type="dropdown" token="PL">
      <label>PL</label>
      <choice value="*">All</choice>
      <default>*</default>
      <fieldForLabel>PL</fieldForLabel>
      <fieldForValue>PL</fieldForValue>
      <search base="base_search">
        <query>| dedup PL | stats count by PL</query>
      </search>
    </input>
    <input type="dropdown" token="developer">
      <label>Developer</label>
      <choice value="*">All</choice>
      <default>*</default>
      <fieldForLabel>Developer</fieldForLabel>
      <fieldForValue>Developer</fieldForValue>
      <search base="base_search">
        <query>| dedup Developer | stats count by Developer</query>
      </search>
    </input>
    <input type="text" token="main_line" depends="$hidden$">
      <default>*</default>
    </input>
  </fieldset>
  <row>
    <panel>
      <title>RH_Release_Information</title>
      <html>
        <style>
          #tableRowColorWithoutJS table tbody td div.multivalue-subcell[data-mv-index="1"]{
            display: none;
          }
        </style>
      </html>
      <table>
        <search base="base_search">
          <query> </query>
        </search>
        <option name="drilldown">cell</option>
        <option name="refresh.display">progressbar</option>
        <format type="color" field="FCOState">
          <colorPalette type="expression">case(match(value,"DRAFT_DEV"), "#DC4E41", match(value,"ACCEPTED"), "#53A051")</colorPalette>
        </format>
        <format type="color" field="PDFState">
          <colorPalette type="expression">case(match(value,"handover_to_integration"), "#DC4E41", match(value,"accepted"), "#53A051", like(value,"integrated"), "#FFFF00", true(), "#C3CBD4")</colorPalette>
        </format>
        <drilldown>
          <condition field="PDF State">
            <set token="form.pdf_state_token">$click.value2$</set>
          </condition>
          <condition field="PL">
            <set token="form.PL">$click.value2$</set>
          </condition>
          <condition field="FCOState">
            <set token="form.fco_name">$click.value2$</set>
          </condition>
          <condition field="Developer">
            <set token="form.developer">$click.value2$</set>
          </condition>
          <condition field="Snapshot">
            <set token="form.snap_shot">$click.value2$</set>
          </condition>
          <condition field="Mainline">
            <set token="form.main_line">$click.value2$</set>
            <link target="_blank">https://at.patchtooling.asml.com/pdf/RH/ML/patches/$click.value2$/</link>
          </condition>
          <condition field="Project">
            <set token="form.patch_name">&gt;$click.value2|n$</set>
            <link target="_blank">https://stream-dashboard.asml.com/db/overall/$click.value2$/</link>
          </condition>
          <condition field="PDF">
            <set token="form.pdf_name">$click.value$</set>
            <link target="_blank">https://at.patchtooling.asml.com/pdf/RH/ML/patches/$row.Project|n$/</link>
          </condition>
          <condition field="req_report"></condition>
          <condition field="PDF start"></condition>
          <condition field="Stream_start"></condition>
          <condition field="Handover"></condition>
          <condition field="Planned Stop"></condition>
          <condition field="Description"></condition>
          <condition field="Quality"></condition>
          <condition field="Questionnaire"></condition>
        </drilldown>
      </table>
      <html depends="$form.show_clear_filter$">
        <p>
          <a href="#" data-set-token="form.main_line" data-value="*" data-unset-token="form.show_clear_filter" class="btn btn-secondary clear-filter">Clear Filter</a>
        </p>
      </html>
      <html depends="$stayhidden$">
        <style>
          #id-of-panel div.multivalue-subcell {
            text-decoration: line-through !important;
          }
          #id-of-panel div.multivalue-subcell[data-mv-index="0"] {
            text-decoration: none !important;
          }
        </style>
      </html>
    </panel>
  </row>
</form>
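For what it's worth, two hedged observations from reading the XML: (1) a post-process search attached to a base search needs a non-empty query that starts with a pipe, and the table's <query> above is empty; (2) the base search filters on $form.PDFState$, $form.PL$, and $form.FCOState$, while the inputs define tokens named pdf_state_token, PL, and fco_state, so two of those tokens are never set by the drop-downs. A minimal sketch for the table's post-process (the column list is an abbreviated assumption):

<search base="base_search">
  <query>| table PDF, PDFState, Mainline, Project, PL, Developer</query>
</search>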
Hello, I am planning to set up some custom metrics indexes using this guide: https://docs.splunk.com/Documentation/ITSI/4.10.2/Entity/CustomIndexes First question: there are two Linux metrics macros. I am planning to modify both to be safe, but I assume I only need to modify the TA one, since that is what I am using to collect data. What is the other one for? 1. itsi_entity_type_nix_metrics_indexes 2. itsi_entity_type_ta_nix_metrics_indexes Second question: I did a Ctrl+F on itsi_im_metrics and noticed that the itsi_im_metrics_indexes macro uses a similar definition to the itsi_entity_type macros listed above. I cannot find any information about this macro. Do I need to modify it? What is it used for? Thank you!
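A hedged way to compare all three definitions side by side (this uses the generic conf-file REST endpoint, so it should work from any search head):

| rest /servicesNS/-/-/configs/conf-macros | search title="itsi*metrics_indexes" | table title, definition, eai:acl.app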