All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi, I need a query that covers non-business hours and weekends.
We have many Cisco Meraki devices sending data via syslog to Splunk. Is there an add-on for the Cisco Meraki devices to extract the fields from the events? One more thing: do the Meraki devices support JSON?
I want my search to consider a single 5-minute timeframe. I have a stats command with a bin for a span of 5 minutes, but when it runs, the results are sometimes split into two 5-minute intervals. I want it to consider only one 5-minute interval. Right now it snaps to, say, 1:00-1:05 and 1:05-1:10; I would like it to just do something like 1:03-1:08. Whatever time it runs, I want that 5-minute span treated as one result set.
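One way to get a single bucket (a sketch; the index and field names here are placeholders) is to drop `bin` entirely and let the search time range define the window, e.g. in a search scheduled every 5 minutes:

```
index="my_index" earliest=-5m latest=now
| stats count avg(response_time) AS avg_rt
```

Because there is no `bin _time`, `stats` collapses everything in the 5-minute range into one result row, regardless of where the clock boundaries fall relative to the run time.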
My UF-HF-Indexer path is working great; however, I need to add an HF-HF-Indexer path as well. The first HF sends to the other HF but the data is not indexed, and the UF attached to HF1 is not showing up at all. Is there something different you have to do for an HF-HF-Indexer chain than for a UF-HF-Indexer chain?
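For an intermediate HF chain, the first HF needs an outputs.conf pointing at the second HF, and the second HF needs a receiving input plus its own outputs.conf on to the indexers. A minimal sketch (host names and ports are placeholders):

```
# outputs.conf on HF1 (forward to HF2)
[tcpout]
defaultGroup = hf2_out

[tcpout:hf2_out]
server = hf2.example.com:9997

# inputs.conf on HF2 (receive from HF1 and the UFs behind it)
[splunktcp://9997]
disabled = 0
```

Also verify HF2's own outputs.conf forwards on to the indexers; if any hop in the chain is missing its outputs stanza, data from the UF behind HF1 will silently stop there.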
I'm using the Splunk Python SDK to download a search result as a CSV file. The output file contains a header row if the search returns one or more events. When there are no events from the search, the generated CSV file is empty, without a header row. As a requirement, I need all generated CSV files to contain at least the header row even when the search does not return any events.
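A sketch of one way to guarantee the header: instead of relying on the export output, write the CSV yourself with `csv.DictWriter` and call `writeheader()` unconditionally. This assumes you have already fetched the results into dicts and know the field names in advance (the field names below are placeholders):

```python
import csv
import io

def results_to_csv(rows, fieldnames, out):
    """Write search results to CSV, emitting the header row even
    when there are zero result rows."""
    writer = csv.DictWriter(out, fieldnames=fieldnames)
    writer.writeheader()  # unconditional: empty result sets still get a header
    writer.writerows(rows)

# Even with no events, the buffer contains the header line.
buf = io.StringIO()
results_to_csv([], ["_time", "host", "count"], buf)
print(buf.getvalue().strip())  # prints "_time,host,count"
```

If the field list is not known up front (e.g. it depends on the search), you would need to derive it from the job's summary or hard-code the expected columns for that report.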
Hi, I have a search query which returns multiple values. For example, the search query returns abc, def, ghi. I need to take these as input and perform a search for them. The logs contain abc-123-678, def-678+943, ghi-678-123, and I need to find the events that contain these strings. Any suggestions?
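A possible approach (a sketch; the index, sourcetype, and field names are placeholders for your actual search): have the subsearch return a field literally named `search`, with each value wrapped in wildcards. Values in a field named `search` are substituted into the outer search as raw terms, so each becomes a wildcarded substring match:

```
index="app_logs"
    [ search index="id_source" sourcetype="my_ids"
      | fields id
      | eval search="*" . id . "*"
      | fields search ]
```

The subsearch expands to something like ( (*abc*) OR (*def*) OR (*ghi*) ), which matches events containing abc-123-678 and the like.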
Hi, I have mail server logs where each mail has a MID number as identifier (for that mail server = host, for that day):

MID 1234567 From: someone1@domain.do
MID 1234567 To: someone3@gmail.com
MID 1234567 Subject: ...
MID 1234567 ...

I'm trying to find the To with a subsearch, extracting the host and MID values. Using MID only works perfectly, but it is not fail-safe (it might happen that more than one mail server has the exact same MID on the same day):

index="mail" "MID" [search index="mail" "MID" ("someone1@domain.do" OR "someone2@othdomain.nu")|rex "MID (?<MID>\d+)"|dedup MID|fields + MID|rename MID as query]

This works perfectly. Now I wanted to add the host variable to get string pairs to search for. Importantly, I want the result as strings without field names. This is what I've tried:

index="mail" "MID" [search index="mail" "MID" ("someone1@domain.do" OR "someone2@othdomain.nu")|rex "MID (?<MID>\d+)"|dedup host MID|fields host MID|format "(" "(" "AND" ")" "OR" ")"|rename output as query]

EDIT/REMARK: I tried to combine the "host" and "MID" fields into "output" in some way, but it just did not work; that is the reason for the non-functioning rename at the end. However, the field names are still there. Could you please help with how to remove both field names, or at least "MID"? (I'm interested in both solutions, but any good solution is perfectly fine.)
EDIT1: I'm checking the inner search result, because the whole search is not working due to this problem.
EDIT2: I have tried to parse the whole output line into a field and then replace with either "rex mode=sed" or "replace" both ways, but it seems I can't get the formatted output into a field anymore.
(this is the inner query only)   index="mail" "MID" [search index="mail" "MID" ("someone1@domain.do" OR "someone2@othdomain.nu")|rex "MID (?<MID>\d+)"|dedup host MID|fields host MID|format "(" "(" "AND" ")" "OR" ")"|rex "^(?<output>.*$)"|eval output=replace(output,"MID=","") |rename output as query    
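A possible approach (a sketch; the quoting may need adjustment for your data) is to build each host/MID pair into a single string inside the subsearch and return it in a field named `search`. Values of a field named `search` are inserted into the outer search verbatim, with no field names attached:

```
index="mail" "MID"
    [ search index="mail" "MID" ("someone1@domain.do" OR "someone2@othdomain.nu")
      | rex "MID (?<MID>\d+)"
      | dedup host MID
      | eval search="(host=\"" . host . "\" \"MID " . MID . "\")"
      | fields search ]
```

Each pair expands to something like (host="mx1" "MID 1234567"), and the implicit subsearch formatting ORs the pairs together, so a MID only matches on the host it came from.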
I need help creating a WIDS/IDPS/Internet content filtering dashboard in Splunk so that I can continuously monitor web traffic or pull up reports when asked.
All, I am attempting to export BT Detection rules from one application to another, and the Config Exporter tool indicates that it was completed successfully, but the rules never arrive in the destination application's BT Detection Rules list.  Can someone step through the process of exporting BT Detection rules from one app to another? ^ Post edited by @Ryan.Paredez to move this reply to its own post. If a post is more than a year old, it's best to create a new post as it's unlikely older threads will get a reply.
I was working in the MLTK; I'm very new to it and exploring. I was working to establish a few searches where I fit an algorithm and then apply it to identify whether any values fall outside a set boundary, and then alert on that. I have two questions from this: Is this a valid use case, or not so much? I have a predicted value after my fit, but it's too close to my actual values, so I was thinking of doing something like (+ or - depending on need): eval bound = (predictedavg - (stdev * 3)). Would it be more beneficial to calculate this in the fit search or when applying the model?
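A sketch of computing the bound at apply time (model and field names are placeholders for whatever your fit search produced): deriving the band when applying, rather than baking it into the fit, lets you tune the multiplier without refitting the model.

```
| apply my_model AS predicted
| eventstats stdev(actual) AS sd
| eval lower = predicted - (sd * 3), upper = predicted + (sd * 3)
| where actual < lower OR actual > upper
```

The `where` clause then leaves only the out-of-band points, which is a natural trigger condition for an alert.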
Is there a way to customize which additional fields to show for a particular notable event / correlation search without affecting other notable events that may be displaying the same additional fields? If so, please help.
So here is what I am thinking. In a nutshell, I am looking to have a main dashboard which displays the status of multiple servers as well as a status indicator for each of them to show the server's current "health". This "health" value would ideally be derived by a secondary dashboard which has multiple status indicators that work in a similar fashion but are tied to one specific server's sub-systems. I may also incorporate another level that goes into the sub-systems similarly; however, let's just say I am only doing the two levels. I am struggling to find a way to make the second, sub-system-level dashboards for each server store and send their health values to the main dashboard, so the color of the main dashboard's status indicator can be determined for that specific server.

The intent is to have these dashboards on different screens throughout the office but also make following issues easier: you just follow the drilldowns of the indicators showing an issue until you get to a diagnostic dashboard of some sort which can aid in troubleshooting.

Some ideas:

Idea One: Have a JavaScript script for each dashboard which uses the search head's file structure to communicate between dashboards.
Problems with Idea One: There are many. I am unsure whether I can make the scripts update regularly and communicate while also updating the status indicators on each dashboard at the same time. I am also unsure how to use the search behind each of the indicators in a way that will work with the scripts. Even if I make all of this work, it may involve a decent amount of upkeep to keep all the pieces working together.

Idea Two: Use tokens or something similar to share the health values from the lower levels of dashboards up to the main dashboard.
Problems with Idea Two: Put simply, I don't know if this is possible.
As far as I can tell, tokens are meant to be used as inputs only, can't be changed by search results, and are meant to be sent downstream via drilldowns, not upstream. Maybe I could start at the lowest level of dashboard and create drilldowns that eventually end up back at the main dashboard, though this has its own issues, such as needing multiple drilldowns to converge their data onto the same dashboard to get the level of information I would need to calculate an average health of the system. I am quite open to suggestions. I have spent quite a while brainstorming and attempting to test different ideas to no avail. I am using Splunk Enterprise version 8.1.2. Please let me know if you have any questions. Thank you for taking the time to read this.
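One correction worth noting: within a single Simple XML dashboard, tokens can in fact be set from search results via a `<done>` handler. A sketch (the index and field names are placeholders):

```xml
<search id="health_base">
  <query>index="server_metrics" host="web01" | stats avg(health) AS overall_health</query>
  <done>
    <set token="tkn_health">$result.overall_health$</set>
  </done>
</search>
```

This does not cross dashboard boundaries, though. For sharing values between dashboards, a common pattern is to have each sub-system dashboard's search scheduled as a report that writes its health score to a lookup with `outputlookup`, and have the main dashboard read all the scores back with `inputlookup`; drilldown URLs with token parameters handle the downstream direction.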
We are using Splunk Cloud and trying to monitor URLs using the Website Monitoring app, but when checking, it is not showing the latest data and last_checked is not updating. Are there any configuration settings I need to update for this? Website Monitoring Version: 2.9.1, Build: 1579823072
Hello, I have a question regarding the indexing of search results. I have an alert that's currently active, performing a search and passing the results to a particular index through log events. I would like to modify this job to run over a specific past time window; however, I can't edit the job, so I would like to be able to run the same search through the Splunk search bar and pass the results to the index. I can run the search and get the results, but I can't output them to the index. Is there a command that I can add to the search query in order to pass the results to the index? Thanks in advance.
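The `collect` command can do this from the search bar: it writes the search results into an index (the index name below is a placeholder; the destination index must already exist and your role needs permission to write to it):

```
index="source_data" earliest=-7d@d latest=-6d@d my_search_terms
| collect index=my_summary
```

By default the collected events get the `stash` sourcetype; `collect` also accepts options such as `sourcetype=` if you need to override that.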
I did not receive the expected result with the below query; could anyone please check? https://community.splunk.com/t5/Splunk-Enterprise/How-to-add-colour-feature-in-dashboard/m-p/603148#M12979 In the below dashboard table, I need to set a colour condition on 2 columns, expected_difference and sla_difference. If expected_difference is negative it should show in red; if it is positive it should show in green. Similarly for sla_difference: if it is negative it should be orange; if it is positive it should show in green.
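A sketch using Simple XML table formatting with an expression-based colour palette (these `<format>` elements go inside the `<table>` element; the hex values are just example red/green/orange shades):

```xml
<format type="color" field="expected_difference">
  <colorPalette type="expression">if(value &lt; 0, "#DC4E41", "#53A051")</colorPalette>
</format>
<format type="color" field="sla_difference">
  <colorPalette type="expression">if(value &lt; 0, "#F1813F", "#53A051")</colorPalette>
</format>
```

Note the `&lt;` escaping of the comparison operator, which is required inside XML; an unescaped `<` there is a common reason such a condition silently fails.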
Hi, we have 3 search heads in a cluster environment behind a load balancer. We are observing that one of the search heads (a non-captain) has very high CPU utilization in comparison to the other 2 search heads. Can anyone please suggest why this is happening and how to troubleshoot it? Thanks.
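Common causes include load balancer session stickiness pinning users to one member and scheduled searches landing unevenly. One way to check whether search load is skewed (a sketch; the host pattern is a placeholder for your SH member names):

```
index=_audit action=search info=completed host="shc-member-*"
| stats count AS searches sum(total_run_time) AS total_runtime BY host
```

If one member shows a disproportionate share of searches or runtime, look at LB stickiness settings and at which member is running the heaviest scheduled searches; the Monitoring Console's search activity views cover the same ground.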
Hi, unfortunately I inherited a Splunk deployment where the previous admin co-located multiple roles on one Splunk host. The admin put the Deployment Server, SHC Deployer, and Monitoring Console roles all on a single box (on-prem). In order to update the Deployment Server, Support told me I need to first remove the MC and Deployer roles. I can move the MC no problem, but moving the SHC Deployer is causing some concern. The deployer is set to the standard "merge_to_default", but I'm not sure how to copy the files over to the new deployer. I know I need to make sure all the apps on the current one get moved over, but what about the local settings created by users? Support says having the DS and Deployer roles on the same box is not supported, which I agree with, but I am not getting any guidance from them. Any advice is greatly appreciated. Thank you.
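A migration sketch, under the assumption that the new deployer host runs the same Splunk version (paths and host names are placeholders): copy the whole shcluster staging directory, then repoint the members.

```
# On the old deployer: archive the deployer staging area
tar czf shcluster.tgz -C $SPLUNK_HOME/etc shcluster

# On the new deployer: restore it (this carries the apps to be pushed)
tar xzf shcluster.tgz -C $SPLUNK_HOME/etc

# On each SH cluster member: repoint to the new deployer
splunk edit shcluster-config -conf_deploy_fetch_url https://new-deployer.example.com:8089
```

The new deployer's server.conf [shclustering] pass4SymmKey must match the cluster's. As for user-created local settings: with "merge_to_default" the deployer only pushed staged app content, so anything users created directly on the search heads lives in the SHC's replicated knowledge objects, not on the deployer, and does not need to be migrated with it.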
Hi everyone, I have observed that some of my lookup files that are intended to be updated on a daily basis by reports do not always have the latest data. I have used 2 approaches so far: 1) the report's "add to lookup" action, and 2) the outputlookup command with append. In both cases I have scheduled them to run daily, but I have observed that my lookups do not always get updated (appended) with the daily chunk of data. I have verified this by running individual searches for data availability on the particular days for which the lookups did not receive data. Can someone please help me understand the possible cause behind this? Thanks in advance.
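One common cause is that the scheduled run itself was skipped (for example, due to scheduler concurrency limits), so the outputlookup never ran. A sketch to check the scheduler history (replace the saved search name with your report's name):

```
index=_internal sourcetype=scheduler savedsearch_name="my_lookup_report"
| stats count BY status
```

Statuses such as skipped versus success, and their timestamps, usually line up with the days that are missing from the lookup; if runs were skipped, staggering schedules or raising concurrency limits is the usual fix.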
Hi, below is my dashboard. I want to clear the selected value of the "Operation" dropdown and reset it to its default value whenever a selection is made in another dropdown named "Method". My dropdown population code is as below:

<input type="dropdown" token="token_method" searchWhenChanged="true">
  <label>Select Method:</label>
  <fieldForLabel>METHOD</fieldForLabel>
  <fieldForValue>METHOD</fieldForValue>
  <search>
    <query>| makeresults | eval API="party_interaction_rest",METHOD="Alle,GET,POST" | append [| makeresults | eval API="ticket_mgmt_rest",METHOD="Alle,GET,POST,PATCH"] | append [| makeresults | eval API="customer_management_rest",METHOD="Alle,GET,PATCH"] | append [| makeresults | eval API="agreement_management_rest",METHOD="Alle,GET"] | append [| makeresults | eval API="product_order_rest",METHOD="Alle,GET,POST,PATCH,DELETE"] | append [| makeresults | eval API="product_inv_rest",METHOD="Alle,GET,POST,PATCH"] | eval METHOD=split(METHOD,",") |mvexpand METHOD| table API METHOD | search API="$token_service$"</query>
  </search>
  <change>
    <condition value="Alle">
      <set token="token_method">*</set>
    </condition>
  </change>
  <default>Alle</default>
  <initialValue>Alle</initialValue>
</input>
<input type="dropdown" token="tkn_OPERATION">
  <label>Select Operation:</label>
  <fieldForLabel>OPERATION</fieldForLabel>
  <fieldForValue>OPERATION</fieldForValue>
  <search>
    <query>| inputlookup append=t REST_OPERATION_LOOKUP.csv where API="$token_service$" METHOD="$token_method$" | dedup OPERATION</query>
  </search>
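One way (a sketch) is to reset the Operation input from inside the Method dropdown's existing <change> block by unsetting the form token, which clears the visible selection so the input falls back to its default:

```xml
<change>
  <condition value="Alle">
    <set token="token_method">*</set>
    <unset token="form.tkn_OPERATION"></unset>
  </condition>
  <condition>
    <unset token="form.tkn_OPERATION"></unset>
  </condition>
</change>
```

The `form.` prefix is what ties the token to the input widget itself; unsetting `tkn_OPERATION` alone would change the value used by searches but leave the old selection displayed.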
I have a serialized JSON string like below:

"{\n \"ID\": \"da419500-f6b4-11ec-8b49-025041000001\",\n \"Name\": \"splunk\",\n \"message\": \"demo\",\n \"tracePoint\": \"START\",\n \"priority\": \"ERROR\",\n \"flowName\": \"demo1\",\n \"timestamp\": \"2022-06-27T16:33:17.175289Z\",\n \"content\": {\n \"payload\": {\n \"message\": \"Hello world!\"\n }\n }\n}"

I need to remove all the "\n" from this string:

"{ \"ID\": \"da419500-f6b4-11ec-8b49-025041000001\", \"Name\": \"splunk\", \"message\": \"demo\", \"tracePoint\": \"START\", \"priority\": \"ERROR\", \"flowName\": \"demo1\", \"timestamp\": \"2022-06-27T16:33:17.175289Z\", \"content\": { \"payload\": { \"message\": \"Hello world!\" } } }"

I tried rex field=myField mode=sed "s/\\n/ /g" and also the replace function, but neither seems to help.
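The field most likely contains a literal backslash followed by the letter n (two characters), not a real newline, so the sed expression needs an escaped backslash. A sketch (field name taken from the post above):

```
| rex mode=sed field=myField "s/\\\\n/ /g"
```

Inside an SPL double-quoted string, `\\\\n` reaches the regex engine as `\\n`, i.e. match a literal backslash then n, whereas `\\n` matches only an actual newline character, which is why the original attempt appeared to do nothing.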