All Topics

Some knowledge objects (KOs) cannot be found in the GUI under Settings > Searches, Reports, and Alerts when searching by name. We are currently running version 8.2.1 with a clustered search head. This happens quite frequently, mostly for alerts whose search strings we have changed.
Hi community! I have a dashboard that shows the alerts in a table and in a graph. The question is: how can I link each fired alert to its respective saved search? I have pasted an image below.
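For illustration, a minimal SPL sketch of one way to tie fired alerts back to their saved-search definitions; it assumes access to the _audit index and the saved/searches REST endpoint, and the field names (ss_name, title, search) should be verified against your version:

index=_audit action=alert_fired
| stats count AS fired_count latest(_time) AS last_fired BY ss_name
| join type=left ss_name
    [| rest /servicesNS/-/-/saved/searches
     | rename title AS ss_name
     | table ss_name search]
| table ss_name fired_count last_fired search

From a table panel built on a search like this, a drilldown can then pass $row.ss_name$ to whatever link or view you want to open.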
Hello community, for a couple of months we have been having a strange issue with Splunk. We are working as usual when we suddenly lose the session, and after refreshing, the UI looks like the image below: as you can see, the menu at the top right is broken and the icon is broken as well. Clicking any of the available options does not work either. We have a search head cluster and a load balancer. If you have any ideas, I would really appreciate them. Thanks in advance. Version 8.2.2.
Hello, I have an existing JSON object and I'd like to merge another JSON object into it. I don't want to combine them into an array; I'd like them merged. Any ideas how I'd do this?

| eval object1=json_object("somekey","value")
| eval object2=json_object("someOtherKey","value")

Combined value:

{"somekey":"value","someOtherKey":"value"}
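If the keys to merge are known in advance, a minimal sketch using the eval JSON functions available since Splunk 8.1 (json_extract and json_set) could look like this; for arbitrary keys you would have to repeat the json_set per key:

| eval object1=json_object("somekey","value")
| eval object2=json_object("someOtherKey","value")
| eval combined=json_set(object1, "someOtherKey", json_extract(object2, "someOtherKey"))

Here, combined evaluates to {"somekey":"value","someOtherKey":"value"}.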
I want to send an alert when a situation has been corrected. For example, if I set up an alert for low disk space on a host and configure it to check every 15 minutes, looking back 15 minutes, I will keep getting alerts until I correct the low disk issue on the host. What I want is for Splunk to somehow know, once the problem is corrected, that it needs to send an "Alert resolved" notification. The catch is that I only want the "Alert resolved" message to go out if an alert was previously sent for that problem. I would think that a flag would need to be set when an alert fires, so that when the condition is corrected, Splunk knows there was previously a problem and sends the "Alert resolved" alert. Anyone know of a way to do this? Gary
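One commonly used pattern is to keep the "a problem alert has fired" state in a lookup and let a second scheduled search fire the recovery message only when that state exists. Everything below is a sketch under assumptions - the lookup file open_disk_alerts.csv, the index/sourcetype, the pct_used field, and the 90% threshold are all placeholders for your own setup:

Firing alert search (records state when it triggers):
index=os sourcetype=df
| stats latest(pct_used) AS pct_used BY host
| where pct_used > 90
| eval status="open", fired_at=now()
| outputlookup append=true open_disk_alerts.csv

"Alert resolved" search (fires only for hosts previously recorded as open):
index=os sourcetype=df
| stats latest(pct_used) AS pct_used BY host
| where pct_used <= 90
| lookup open_disk_alerts.csv host OUTPUT status
| where status="open"

After the resolved alert fires, you would clear those hosts from the lookup (for example by rewriting it with outputlookup minus the recovered hosts) so the resolved message is only sent once.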
I have created a multiselect input using a dynamic list:

<input type="multiselect" token="my_id" searchWhenChanged="true">
  <label>My ID</label>
  <fieldForLabel>my_id</fieldForLabel>
  <fieldForValue>my_id</fieldForValue>
  <search>
    <progress>
      <condition match="'job.resultCount'==1">
        <set token="form.my_id">$result.my_id$</set>
        <set token="my_id">$result.my_id$</set>
      </condition>
    </progress>
    <query>| tstats values(my_id) as my_id where index=my_index sourcetype IN (my_sourcetype) | mvexpand my_id | table my_id | dedup my_id</query>
    <earliest>$time.earliest$</earliest>
    <latest>$time.latest$</latest>
  </search>
  <valuePrefix>"</valuePrefix>
  <valueSuffix>"</valueSuffix>
  <delimiter>, </delimiter>
</input>

I also have a pie chart:

<row>
  <panel>
    <title>Total Vulnerabilities by My ID</title>
    <chart>
      <search base="base_search">
        <query>| stats count by my_id</query>
      </search>
      <option name="charting.axisTitleX.visibility">visible</option>
      <option name="charting.axisTitleY.visibility">visible</option>
      <option name="charting.axisTitleY2.visibility">visible</option>
      <option name="charting.axisY.abbreviation">auto</option>
      <option name="charting.chart">pie</option>
      <option name="charting.chart.nullValueMode">zero</option>
      <option name="charting.chart.stackMode">stacked</option>
      <option name="charting.drilldown">all</option>
      <option name="charting.legend.labelStyle.overflowMode">ellipsisEnd</option>
      <option name="charting.legend.placement">right</option>
      <option name="link.exportResults.visible">$exportResults$</option>
      <option name="link.inspectSearch.visible">$inspectSearch$</option>
      <option name="link.openPivot.visible">$openPivot$</option>
      <option name="link.openSearch.visible">$openSearch$</option>
      <option name="refresh.display">progressbar</option>
      <drilldown>
        <set token="form.my_id">$click.value$</set>
      </drilldown>
    </chart>
  </panel>

I would like to HIDE the panel (not the row) when only one value is selected from the multiselect, and SHOW the panel (not the row) when more than one value is selected. Is this possible? I have seen this accomplished for static lists, but I am unable to use a static list in this instance. Thank you.
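This should be doable with a <change> handler on the multiselect that sets or unsets a token depending on how many values are selected, plus a depends attribute on the panel. The sketch below assumes that 'form.my_id' behaves as a multivalue token inside the condition's eval expression (worth verifying on your version); show_my_id_chart is just an illustrative token name:

<input type="multiselect" token="my_id" searchWhenChanged="true">
  ... existing input contents ...
  <change>
    <condition match="mvcount('form.my_id') > 1">
      <set token="show_my_id_chart">true</set>
    </condition>
    <condition>
      <unset token="show_my_id_chart"></unset>
    </condition>
  </change>
</input>
...
<panel depends="$show_my_id_chart$">
  <title>Total Vulnerabilities by My ID</title>
  <chart> ... existing chart ... </chart>
</panel>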
We have alert events coming into Splunk and Splunk ITSI that we open ServiceNow incidents for, but depending on the event contents the incident needs to be routed to different teams. An example scenario: if the alert comes from server A, set the ServiceNow assignment group to team A; alerts from all other servers should go to team B. We will have many of these scenarios in our environment. What is the best way to do this? Thanks in advance!
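A sketch of one way to do the routing in the search itself, assuming a lookup you maintain (snow_routing.csv with columns host and assignment_group - both names are placeholders):

... your alert / correlation search ...
| lookup snow_routing.csv host OUTPUT assignment_group
| eval assignment_group=coalesce(assignment_group, "Team B")

The resulting field can then be referenced in the ServiceNow incident action (for example via a $result.assignment_group$ token); the exact token syntax your ServiceNow integration accepts is something to confirm in its documentation.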
Hi, I was looking at the latest version of the Linux Auditd App & Add-on. Both are listed as 3.1.2 at the source https://github.com/doksu/splunk_auditd, but only the Add-on shows the latest version on Splunkbase; the App is still at 3.1.0.

https://splunkbase.splunk.com/app/4232 - Add-on - Version 3.1.2
https://splunkbase.splunk.com/app/2642 - App - Version 3.1.0

Should I be downloading the App directly from GitHub? Thanks
I am probably overengineering this, but this is the only way I could get a script to execute on a UF: via a deployed application's bin folder. I have a .path file which executes powershell.exe -command "& 'path_to_ps1_script'" and it is placed, as stated, in the myapp\bin\scripts folder. The PS1 script returns valid JSON. The app's inputs.conf stanza:

[script://$SPLUNK_HOME\etc\apps\<my app>\bin\scripts\myscript.path]
disabled=false
interval=60
sourcetype=my_source_type
source=my_source
send_index_as_argument_for_path=false
index=my_index

As soon as I put index=my_index in the stanza, the data stops being indexed for some reason. If I remove the index setting, the data is indexed into the default "main" index, but I am looking for a way to send that data to an index I specify. Any suggestions?
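One thing worth ruling out first: by default, events sent to an index that is not defined on the indexer are dropped, which would match the "works only without index=" symptom. A minimal indexes.conf stanza on the indexer(s) would look like the sketch below (paths shown are the defaults, adjust as needed):

[my_index]
homePath   = $SPLUNK_DB/my_index/db
coldPath   = $SPLUNK_DB/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb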
Greetings. Is it possible to merge 2 searches? If there is a common value, connect the events; if there is no match, keep the events with nulls. I have tried the join function, but join drops the events where there is no match.
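A common way to get outer-join-like behaviour without the join command is append plus stats; the sketch below uses placeholder index names and a shared field called common_field:

index=index_a <search terms A>
| append
    [ search index=index_b <search terms B> ]
| stats values(*) AS * BY common_field

Alternatively, join type=outer (or type=left) keeps the left-hand events that have no match and leaves the joined fields null, which sounds like the behaviour that was missing with the default inner join.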
Hello, has anyone ever struggled to install the MS AD Objects app? During the installation it tells me that the baseline is not present in the index, even though the sync with Active Directory completed successfully.
Hi, I want to monitor a C++ application running on a Windows machine. Is it possible to monitor it with only the C/C++ SDK agent, without adding any API calls to the source code or instrumenting it? Regards, Hemanth Kumar.
Hello Community, my team and I are trying to configure a custom deployment application which has to be implemented ONLY through the command line. There should be no interaction with the UI while configuring the custom app itself - this is a client requirement. The application gathers log data from three different log files located on a Windows Server 2019 host, and the main idea is to use this app to properly segment all the information coming from those logs. After the information is gathered, it has to be searchable in the default Splunk Search application. As described in the documentation, the deployment application is placed under:

$SPLUNK_HOME/etc/deployment-apps

The underlying file structure is as follows:

local (contains: inputs.conf, props.conf)
metadata (contains: local.meta)

All the configuration we have performed is defined in props.conf, specifying the custom fields we want to display in the Splunk Search application. The props.conf file is shown below:

[inwebo:synclog]
DATETIME_CONFIG =
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
TIME_FORMAT = %d.%m.%Y %H:%M:%S
category = Miscellaneous
description = InWebo Sync Activation Mails Log
disabled = false
pulldown_type = true
MAX_TIMESTAMP_LOOKAHEAD =
EXTRACT-TimeStamp = ^(?P<TimeStamp>\d+\.\d+\.\d+\s+\d+:\d+:\d+)

[inwebo:iwdslog]
DATETIME_CONFIG =
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
category = Miscellaneous
pulldown_type = 1
EXTRACT-TimeDate = ^(?P<TimeDate>[^,]+)
EXTRACT-Status = ^[^ \n]* (?P<Status>[^ ]+)
MAX_TIMESTAMP_LOOKAHEAD =
description = InWebo IWDS Log
disabled = false
TIME_FORMAT = %Y-%m-%dT%H:%M:%S

[inwebo:gdriveuploaderlog]
DATETIME_CONFIG =
LINE_BREAKER = ([\r\n]+)
MAX_TIMESTAMP_LOOKAHEAD =
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
TIME_FORMAT = %d-%m-%Y %H:%M:%S
category = Miscellaneous
description = GDrive Uploader (InWebo Service)
pulldown_type = 1
disabled = false
EXTRACT-DateTime = ^(?P<DateTime>\d+\-\d+\-\d+\s+\d+:\d+:\d+)
EXTRACT-Status = (?=[^U]*(?:Upload|U.*Upload))^(?:[^ \n]* ){3}(?P<Status>\w+)

As you can see, the EXTRACT-* attributes are what we thought should be enough to have the necessary new fields appear in the Splunk Search application. As for the inputs.conf file, we have pre-defined the necessary information about the log locations and the index into which the data should be gathered. Here's a small sample just in case there is a misconfiguration there:

[monitor://E:\inWebo-Prod-Varn-4358\log\IWDS.log]
disabled = false
index = mfa_inwebo
sourcetype = iwdslog
host = BJKW1PZJFLTFA01

** the other logs are specified analogously in the same file

In order to create a relation between the deployment application we have implemented and the default Splunk Search application, we have added the index configuration to the default Search application's indexes.conf file:

[mfa_inwebo]
coldPath = $SPLUNK_DB/mfa_inwebo/colddb
enableDataIntegrityControl = 0
enableTsidxReduction = 0
homePath = $SPLUNK_DB/mfa_inwebo/db
maxTotalDataSizeMB = 512000
thawedPath = $SPLUNK_DB/mfa_inwebo/thaweddb
archiver.enableDataArchive = 0
bucketMerging = 0
bucketRebuildMemoryHint = 0
compressRawdata = 1
enableOnlineBucketRepair = 1
hotBucketStreaming.deleteHotsAfterRestart = 0
hotBucketStreaming.removeRemoteSlicesOnRoll = 0
hotBucketStreaming.reportStatus = 0
hotBucketStreaming.sendSlices = 0
metric.enableFloatingPointCompression = 1
metric.stubOutRawdataJournal = 1
minHotIdleSecsBeforeForceRoll = 0
rtRouterQueueSize =
rtRouterThreads =
selfStorageThreads =
suspendHotRollByDeleteQuery = 0
syncMeta = 1
tsidxWritingLevel =

The main goal is to have an independent deployment application which can easily be transferred to another search head and be searchable without additional configuration. That is why we did not copy the deployment app's props.conf settings into the props.conf of the default Search application. The problem we are facing is that we are not able to see the field extractions in the default Search application - they simply are not there as they should be. The index is displayed, the source types are visible as well, and all the necessary log information is available, but the custom fields are not present. Do you notice any reason why the custom field extractions are not displayed in the search results? I hope this provides enough clarification; if not, I am ready to provide additional resources. Thank you very much for your cooperation and support in advance. Nikola Atanasov
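For what it's worth, a sketch of where the search-time pieces usually need to live (the app name inwebo_sh_props is purely illustrative): EXTRACT-* settings are applied at search time, so the props.conf that defines them generally has to be present on the search head(s) rather than only inside the app deployed to the forwarders, and the stanza names need to match the sourcetype the events actually arrive with (the inputs.conf above assigns iwdslog, while props.conf uses inwebo:iwdslog - worth checking which one your events carry):

$SPLUNK_HOME/etc/apps/inwebo_sh_props/     <- on each search head
    default/props.conf                     <- the EXTRACT-* stanzas shown above
    metadata/default.meta                  <- e.g. [props] with export = system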
Hi everyone, this might be a weird question, but I have been testing out ITSI, and recently I tried editing some thresholds I had set for KPIs I created myself. Regardless of the service or KPI I try to edit, the Thresholding drop-down does NOT open. I can open the "Search and Calculate" drop-down with no problem, as well as the "Anomaly Detection" drop-down (and get a Java warning to boot). I feel like this might be an issue with the system, but why specifically Thresholding? What happens is that the arrow points down as if the drop-down had opened, but nothing actually happens. I have tried different browsers too. If anyone can help, it would be very much appreciated.
Hi Splunk Community team, I have observed a high number of events (logs) from WinEventLog:Security. Please suggest best practices or a solution to reduce/suppress these events. I have referred to the document below and found that the current add-on is up to date. https://docs.splunk.com/Documentation/WindowsAddOn/8.1.1/User/Configuration
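The usual lever for this is the blacklist settings of the [WinEventLog://Security] input in the Windows add-on, which drop the events at the forwarder before they are ever indexed. A sketch (the EventCodes and regexes below are examples only - choose codes that are genuinely noisy and not security-relevant in your environment):

[WinEventLog://Security]
blacklist1 = EventCode="4662" Message="Object Type:(?!\s*groupPolicyContainer)"
blacklist2 = EventCode="566" Message="Object Type:(?!\s*groupPolicyContainer)"
blacklist3 = EventCode="4688" Message="New Process Name:\s+.*splunkd\.exe"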
Hi there. As in the subject: how can I make NMON aggregated data available NOT ONLY to admin users? I can query all NMON data since I am an admin on the system, but I would like to make some metrics available to dashboards for normal users; all queries run by normal users return no data. How can this be done? I tried granting read access to everyone on all NMON event types, but with no success. Do I have to edit ALL NMON objects? Thanks
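Besides permissions on the knowledge objects themselves, "no data for normal users" is often just index access for the role. A sketch of the relevant authorize.conf settings (the role and index names are assumptions - adjust to your NMON index):

# authorize.conf
[role_user]
srchIndexesAllowed = main;nmon
srchIndexesDefault = main;nmon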
Hi there, we have implemented a component to send logs to Splunk Cloud, but we are getting a lot of extra blank lines along with the logs. We have implemented all the components using containerization and want to know why we are getting these extra blank lines. We are using the "splunk" Docker logging driver and sending events via the HTTP Event Collector (HEC).
Is there a way to export a glass table created in ITSI so that it can be used as an iFrame link? We currently receive a '[stackname] refused to connect' error when attempting to add the glass table link to an internal website.
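For reference, "refused to connect" inside an iFrame typically comes from the X-Frame-Options: SAMEORIGIN header that Splunk Web sends by default. On a search head you administer yourself, the setting sketched below controls it; on Splunk Cloud (which the [stackname] wording suggests) it is not directly editable and would go through support, so treat this purely as something to verify:

# web.conf - sketch only
[settings]
x_frame_options_sameorigin = false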
I've got this error when testing the creation of an incident.
Good afternoon! I want to know how Splunk stores data; I cannot find detailed information. Can I connect a DBMS to Splunk (for example MS SQL or MySQL) in order to store the data that is ingested into Splunk? What is the default database for Splunk, and what type of database is it (relational or NoSQL)? How does Splunk store data, such as data coming from JSON files?