All Topics

Hello, I have an alert set up which reads a lookup file (populated by another report); if there are any records in the lookup file, an email should be triggered for each record.

I understand this can be done using the "For each result" trigger, but I also want to use some field values from each record in the email subject. In this case, I want 6 emails to be triggered with subject lines such as:

Email 1: Selfheal Alert - Cust A - Tomcat Stopped - Device A1 - May-24 - Device Level
Email 2: Selfheal Alert - Cust A - Tomcat Stopped - Device A2 - May-24 - Device Level
Email 3: Selfheal Alert - Cust B - Failed Job - Device B1 - May-24 - Device Level
Email 4: Selfheal Alert - Cust C - Tomcat Stopped - Device C1 - May-24 - Device Level
Email 5: Selfheal Alert - Cust C - Failed Job - Device C2 - May-24 - Device Level
Email 6: Selfheal Alert - Cust C - Failed Job - Device C3 - May-24 - Device Level

How can I achieve this? Thank you.
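A minimal sketch of what this could look like, assuming the lookup rows carry fields named customer, issue, device, and month (hypothetical names; substitute the actual field names from the lookup). With the trigger set to "For each result", the email action can reference the triggering result's fields via $result.<fieldname>$ tokens, either in the Subject box in the UI or in savedsearches.conf:

    [Selfheal Alert]
    # "For each result" = run the alert actions once per result
    alert.digest_mode = 0
    action.email = 1
    action.email.to = oncall@example.com
    action.email.subject = Selfheal Alert - $result.customer$ - $result.issue$ - $result.device$ - $result.month$ - Device Level

Each of the six results would then produce its own email with the subject built from that row's field values.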
Dear All, I need help integrating OpenShift with our Splunk Enterprise. I have integrated OpenShift with Splunk using HEC; the connection paired successfully, and when a test message was sent from OpenShift we received it in Splunk, but we don't receive logs continuously after that. We can only see the test logs, and afterwards no logs flow to Splunk. Can someone please guide me here?
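One way to narrow down whether the problem is on the Splunk/HEC side or on the OpenShift forwarding side is to send an event to the same HEC token directly; the hostname, port, and token below are placeholders:

    curl -k "https://splunk.example.com:8088/services/collector/event" \
      -H "Authorization: Splunk 00000000-0000-0000-0000-000000000000" \
      -d '{"event": "hec connectivity test", "sourcetype": "manual_test"}'

If that event is indexed immediately, the HEC input itself is healthy and the investigation moves to the OpenShift log-forwarding configuration.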
I have two questions about lookup operation.

1. When you register a new lookup table file (CSV) from the GUI, you can immediately reference it on the search screen:

    | inputlookup "lookup.csv"

However, it does not appear in the list of files in the "Lookup file" pull-down on the following Create New Lookup Definition screen. Each time, it only shows up after more than a day, which slows down setup. Is this a limitation by design? If you know the cause, please let us know.

2. A lookup is not returning fields. The following CSV file is registered, and a lookup definition and automatic lookup are also set up.

【lookup.csv】
PC_Name  | Status | MacAddr1    | MacAddr2
------------------------------------------
PC_Name1 | Used   | aa:bb:cc... | zz:yy:xx...
PC_Name2 | Used   | aa:bb:cc... | zz:yy:xx...
PC_Name3 | Used   | aa:bb:cc... | zz:yy:xx...

(MacAddr1 is the Ethernet address and MacAddr2 the WiFi address; I want to use MacAddr2 as the key.)

The target index's events contain a field CL_MacAddr, defined as a calculated field. I would like to look up this CL_MacAddr MAC address in lookup.csv and output PC_Name and Status as fields, but it is not working. For example, when I run the following on the search screen, only the existing fields appear, not PC_Name, Status, etc.:

    index="nc-wlx402" sourcetype="NC-WIFI-3" | lookup "lookup.csv" MACAddr2 AS CL_MacAddr OutputNew

However, another lookup definition works against the same index and sourcetype (automatic lookup, confirmed working). I'm assuming this is something basic... please help me.
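For comparison, this is roughly what a working invocation could look like, assuming a lookup definition named pc_lookup (hypothetical name) has been created on top of lookup.csv and the CSV column is really spelled MacAddr2; OUTPUTNEW is followed by the fields to bring back:

    index="nc-wlx402" sourcetype="NC-WIFI-3"
    | lookup pc_lookup MacAddr2 AS CL_MacAddr OUTPUTNEW PC_Name Status

One thing worth double-checking in the query above is capitalization: it uses MACAddr2 while the CSV header is MacAddr2, and Splunk field names are case sensitive.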
index="xyz" sourcetype="abc"
| search Country="ggg" statusCode=200
| stats count as Registration
| where Registration=0

Could you please help me modify this query? The time period should be the last 24 hours.
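One way to pin the query itself to the last 24 hours, regardless of the time picker, is to add explicit time modifiers to the base search; a sketch:

    index="xyz" sourcetype="abc" Country="ggg" statusCode=200 earliest=-24h latest=now
    | stats count as Registration
    | where Registration=0

Note that stats count always returns a row, so the where Registration=0 clause will still produce a result (and can drive an alert) even when no events match in the window.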
Hi Team, I have an auto-extracted field, auth.policies{}, and another field called user. Whenever auth.policies{} is root, I need that value to become part of the user field. May I know how to do it? Is it possible to use case and coalesce together?
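A sketch of one reading of this requirement (auth.policies{} is typically multivalued, so the check below uses mvfind; adjust if the intent is to append to user rather than fill it in when empty):

    ... | eval user=if(isnotnull(mvfind('auth.policies{}', "^root$")), coalesce(user, "root"), user)

Field names containing dots or braces have to be wrapped in single quotes on the right-hand side of eval, as above. case and coalesce can be combined the same way, e.g. coalesce(user, case('auth.policies{}'="root", "root")).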
We currently have a Splunk Enterprise cluster that uses SmartStore in AWS S3. We're looking to move the cluster to an entirely new AWS account. However, we are not sure of the best way to move the contents of the SmartStore bucket without corrupting any of the files that have been indexed. What would be the best way to migrate from one SmartStore backend to another without losing any data?
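For context, the piece of configuration that would eventually have to point at the new account's bucket is the SmartStore remote volume in indexes.conf; the bucket name, region, and volume name below are placeholders:

    [volume:remote_store]
    storageType = remote
    path = s3://new-account-smartstore-bucket
    remote.s3.endpoint = https://s3.us-east-1.amazonaws.com

    [default]
    remotePath = volume:remote_store/$_index_name

The open question is how to copy the existing remote buckets into the new account's bucket (for example with S3 replication or aws s3 sync) so that the cluster can be repointed without losing or corrupting data.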
Hi all, we have around 8 dashboards fetching data from the same index. There are around 150 hosts in this index, but we don't want to see data from a particular 50 of those hosts in a dashboard. How can this be done? Any inputs on this, please?
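A couple of sketches, assuming the 50 hosts can be listed somewhere; the lookup name excluded_hosts.csv and its host column are hypothetical. Either exclude them inline in each panel's base search:

    index=my_index NOT host IN ("hostA", "hostB", "hostC")

or keep the list in a lookup and exclude it with a subsearch, so all 8 dashboards share one maintained list:

    index=my_index NOT [ | inputlookup excluded_hosts.csv | fields host ]

Wrapping either filter in a macro keeps the dashboards themselves unchanged when the host list changes.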
Logs are in JSON format and we want to get the attribute.app.servicecode field values as a dropdown in a classic dashboard.

Query:

    index=application-idx | stats count by attribute.app.servicecode

How do we get these field values into a dropdown?
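A sketch of what the dropdown input could look like in Simple XML; the token name, label, and time range are placeholders, and renaming the dotted field just makes it easier to reference:

    <input type="dropdown" token="servicecode">
      <label>Service code</label>
      <fieldForLabel>servicecode</fieldForLabel>
      <fieldForValue>servicecode</fieldForValue>
      <search>
        <query>index=application-idx | stats count by attribute.app.servicecode | rename attribute.app.servicecode as servicecode | sort servicecode</query>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </search>
    </input>

Panels can then filter on the selection with something like attribute.app.servicecode="$servicecode$".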
https://docs.splunk.com/Documentation/SplunkCloud/9.1.2312/Data/Usepersistentqueues

Persistent queuing is available for certain types of inputs, but not all. One major limitation with persistent queues at inputs, even when enabled on certain UF/HF/IHF/IUF inputs: if the downstream parsingqueue/indexqueue/tcpoutqueue are blocked or saturated and a DS bundle push triggers a Splunk restart, events will be dropped because the UF/HF/IHF/IUF failed to drain its queues. On a Windows DC, persistent queuing is enabled for the Windows modular inputs, yet a DS bundle push triggers a restart and Windows modular input events still sitting in the parsingqueue/tcpoutqueue are dropped. On a Windows DC, some Windows event logs (events that occurred while the workstation was shutting down) are always lost. When laptops are off the network and restarted or shut down, in-memory queue events are dropped. Even with PQ at the inputs, events still held in Splunk's in-memory queues can be dropped during a Splunk restart on the forwarding tier.

Typical steps on a laptop where events are always lost:
1. Splunk is installed on a Windows laptop.
2. The laptop is put to sleep.
3. The Splunk service stops.
4. One or two Windows events are generated, such as 4634 (Session Destroyed).
5. Later the laptop wakes up and one or two events are generated, such as 4624 (Logon).
6. Then the Splunk service starts.
7. The events created when sleep started and when sleep ended are never ingested.
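For reference, this is the kind of per-input stanza the discussion is about; persistent queuing is enabled per input in inputs.conf, and the stanza and size below are illustrative only:

    [WinEventLog://Security]
    disabled = 0
    # spill events to disk when the in-memory input queue is full
    persistentQueueSize = 100MB

The limitation described above is that events already promoted from this on-disk queue into the in-memory parsing/output queues can still be lost if the process stops before they are forwarded.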
I want to do some analysis on "status" below but am having a hard time getting to it. I start with:

    | spath path=log.content | table log.content

but that only gives me the JSON array from content. I've tried spath path=log.content{} and spath path=log.content{}.status, but they end up empty. I want to be able to do a ternary operation on "status", like the sample below:

    | mvexpand log.content{}.status
    | eval Service=if('log.content{}.status'="CANCELLED", "Cancelled", if('log.content{}.status'="BAY", "Bay", NULL))
    | where isnotnull(Service)
    | stats count by Service
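A sketch of one way this usually ends up working, assuming log.content is a proper JSON array under the event's log key:

    | spath path=log.content{}.status output=status
    | mvexpand status
    | eval Service=case(status="CANCELLED", "Cancelled", status="BAY", "Bay")
    | where isnotnull(Service)
    | stats count by Service

If log.content instead arrives as an escaped JSON string, a second spath over that string is usually needed first:

    | spath path=log.content output=content
    | spath input=content path={}.status output=status

Using output= avoids quoting the braces in every later command, and mvexpand then operates on the extracted multivalue field rather than the original path.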
Hi everyone, I need help on how to integrate SolarWinds with Splunk Cloud or Splunk Enterprise. As far as I have seen, the add-on is not supported by Splunk support. Please suggest the best possible ways!
Hi, I want to display time on my dashboard, but all I see is just two fields with data. Any help with the search to populate the rest of the fields would be appreciated. I have attached my dashboard. My search looks like this:

    index=a sourcetype=b earliest=-1d
        [| inputlookup M003_siem_ass_list where FMA_id=*OS -001*
         | stats values(ass) as search
         | eval search=mvjoin(search,", OR ")]
    | fields ip FMA_id _time d_role
    | stats latest(_time) as _time values(*) by ip
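For what it's worth, a sketch of the shape that usually fills in the remaining columns, assuming the goal is one row per ip with a human-readable timestamp (the explicit values(...) aggregations and the strftime format are illustrative):

    index=a sourcetype=b earliest=-1d
        [| inputlookup M003_siem_ass_list where FMA_id="*OS -001*"
         | stats values(ass) as search
         | eval search=mvjoin(search," OR ")]
    | fields ip FMA_id _time d_role
    | stats latest(_time) as _time values(FMA_id) as FMA_id values(d_role) as d_role by ip
    | eval Time=strftime(_time, "%Y-%m-%d %H:%M:%S")
    | table ip Time FMA_id d_role

Without an as clause, values(*) produces columns named values(FMA_id), values(d_role), and so on, which is often why only a couple of the expected fields appear, and _time only renders nicely once it is formatted with strftime.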
Hi Splunkers, I have a doubt about the users that run scheduled searches. I know well that if a user owns a knowledge object, such as a correlation search, and that user is deleted or disabled, we can run into problems like orphaned objects. So the best practice is to create a service user and assign the KOs to it. Fine. My question is: suppose we have many scheduled correlation searches, say more than 100 or 200. Is assigning all those searches to one single service user fine, or is it better to create multiple service users, so as to avoid performance problems? I ask because of a case some colleagues shared with me: because of problems with search lag and skipped searches, in addition to fixing the search schedules, the people involved split ownership across multiple users. Is that useful or not?
Hi Experts, someone installed the ESCU app directly on the search head cluster members. Now I am upgrading this app to a newer release.

Question: since this app was not installed from the deployer, but I want to upgrade it via the deployer, what is the best practice and method to achieve this?

Here is my plan; please correct me if I am thinking wrong:

Step 1) Copy the installed app folder from one of the SHC members to the deployer under etc/apps, so that it is installed on the deployer, then manually upgrade it there using the deployer GUI.
Step 2) Once upgraded, copy the upgraded app from etc/apps to etc/shcluster/apps.
Step 3) Run apply shcluster-bundle on the deployer to push the upgraded app to the SHC members.

Do you think the above is the right approach? If not, what else can I do?
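For reference, the push in Step 3 is the standard deployer command; the hostname and credentials below are placeholders:

    splunk apply shcluster-bundle --answer-yes -target https://sh-member1.example.com:8089 -auth admin:changeme

One thing to keep in mind: apps pushed by the deployer replace the app's default/ content on the members but leave member-level local/ changes in place, so any local tweaks made when the app was installed directly on the members will survive the push and may shadow the upgraded defaults.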
Hello Splunk Community, I'm encountering an issue with configuration replication in Splunk Cloud Victoria Experience when using search head clusters behind a load balancer. Here's the scenario: I have developed a private custom search command app that requires some user configuration. For this purpose, I've added a custom config file in the /etc/apps/<appname>/default directory. Additionally, I've configured app.conf as follows:

    [triggers]
    reload.<custom_conf> = simple

    [shclustering]
    deployer_push_mode = full

I've also included a server.conf inside etc/apps/<appname>/default with the following configuration:

    [shclustering]
    conf_replication_include.<custom_conf_name> = true

When attempting to install this private app using the install_app_from_file option in a Splunk Cloud Victoria Experience with search head clusters behind a load balancer, it appears that the app configuration is not being replicated across search heads. Could someone please assist me in identifying if there's anything I'm missing or doing incorrectly? Thank you. Avnish
Hello. In Splunk dashboards, visualization charts that display data have their background color set to white (#FFFFFF) by default, and it turns black if I change the theme from Light to Dark. I want to find a way to get the same color behavior for the Rectangle element. Currently its default fill color is grey (#c3cbd4), and it turns dark grey if I change the theme from light to dark. If I change the background color of the rectangle in settings, it stops changing color when I change the theme. How can I make the Rectangle element on a Splunk dashboard be white for the Light theme and black for the Dark theme? Thanks!
Hello! I am having an issue getting annotations to work within a Dashboard Studio column chart. I have tried a bunch of different ways, but it isn't cooperating. The chart I have is just System_Name on the X axis and Risk_Score on the Y axis. I'd like to be able to highlight where the System_Name in question shows up on the chart, as the annotation examples in the documentation demonstrate. My current code for the chart is as follows. Does anyone have any suggestions as to what I'm doing wrong here?

Chart itself:

    {
      "type": "splunk.column",
      "options": {
        "seriesColorsByField": {},
        "annotationColor": "> annotation | seriesByIndex('2')",
        "annotationLabel": "> annotation | seriesByIndex('1')",
        "annotationX": "> annotation | seriesByIndex('0')",
        "legendDisplay": "off"
      },
      "dataSources": {
        "primary": "ds_abUJLKDj",
        "annotation": "ds_YPQ3EYqR"
      },
      "showProgressBar": false,
      "showLastUpdated": false,
      "context": {}
    }

Searches:

    "ds_abUJLKDj": {
      "type": "ds.search",
      "options": {
        "query": "`index` \n| stats latest(Risk_Score) AS Risk_Score by System_Name\n| eval Risk_Score=round(Risk_Score, 2)\n| sort Risk_Score"
      },
      "name": "risk_score_chart"
    },
    "ds_YPQ3EYqR": {
      "type": "ds.search",
      "options": {
        "query": "`index` \n| stats latest(Risk_Score) AS Risk_Score by System_Name\n| eval Risk_Score=round(Risk_Score, 2), color=\"#f44336\", Annotation_Label= (\"The risk score for $system_name$ is \" + Risk_Score) \n| sort Risk_Score\n| where System_Name = \"$system_name$\"\n| table System_Name, Annotation_Label, color"
      },
      "name": "risk_score_chart_annotation"
    }
Hi all, in the past I used a CLI command to disable the indicators feature. Do you know how I can enable it back?
Morning, Splunkers. I've got a dashboard that gets some of its input from an external link. The input that comes in determines which system is being displayed by the dashboard, with different settings applied through a <change> block for each, and then shows the necessary information in a line graph. That part is working perfectly, but what I'm trying to do is set the color of the line graph based on the system chosen, and I'm trying to keep it simple for future edits. I've set the colors I'm currently using in the <init> section as follows:

    <init>
      <set token="red">0xFF3333</set>
      <set token="purple">0x8833FF</set>
      <set token="green">0x00FF00</set>
    </init>

The system selection looks like this:

    <input token="system" depends="$NotDisplayed$">
      <change>
        <condition value="System-A">
          <set token="index_filter">index_A</set>
          <set token="display_name">System-A</set>
          <set token="color">$purple$</set>
        </condition>
        <condition value="System-B">
          <set token="index_filter">index_B</set>
          <set token="display_name">System-B</set>
          <set token="color">$green$</set>
        </condition>
        <condition value="System-C">
          <set token="index_filter">index_C</set>
          <set token="display_name">System-C</set>
          <set token="color">$red$</set>
        </condition>
      </change>
    </input>

I now have a single query window putting up a line graph with the necessary information brought in from the external link. Like I said above, that part works perfectly, but what DOESN'T work is the color. Here's what my option field currently looks like:

    <option name="charting.fieldColors">{"MyField":$color$}</option>

The idea here is that if I add future systems, I don't have to keep punching in hex codes for colors, I just enter a color name token. Unfortunately, what ends up happening is the line graph color is black no matter which color I use. If I take the $color$ token out of the code and put the hex code in directly, it works fine. It also works if I put the hex code directly in the system selection instead of the color name token. Is there a trick to having a token reference another token in a dashboard? Or is this one of those "quit being fancy and do it the hard way" type of things? Any help will be appreciated. Running Splunk 8.2.4, in case it matters.
Hello, after upgrading from Classic to Victoria Experience on our Splunk Cloud stack, we have encountered issues retrieving data from AWS SQS-based S3 inputs. The inputs remained after the migration, but for some of them the SQS queue name appears to be missing. When we try to configure these inputs, we immediately receive a 404 error in python.log. Please see the screenshot below for reference. Furthermore, the error message indicates that the SQS queue may not be present in the given region; however, we have confirmed that the queue does exist in the specified region. Has anyone else experienced this issue and can offer assistance? Thank you.