All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I have changed files in my appserver/static/javascript directory, but the setup page that refers to them does not update. I tried uninstalling the add-on and restarting the Splunk server, but nothing changes. Please help me figure out what I am missing. This is my setup page dashboard (./javascript/setup_page.js is the file I changed, without any effect):

<dashboard isDashboard="false" version="1.1"
           script="./javascript/setup_page.js"
           stylesheet="./styles/setup_page.css"
           hideEdit="true"
           hideAppBar="true">
    <label>Setup Page</label>
    <row>
        <panel>
            <html>
                <div id="main_container"></div>
            </html>
        </panel>
    </row>
</dashboard>
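Splunk Web caches files served from appserver/static, so edited JavaScript often keeps being served from the old cache version even after a restart. A common fix, assuming you can log in to Splunk Web as an admin, is to visit the _bump endpoint to invalidate the static-asset cache, then hard-refresh the browser:

https://<your-splunk-host>:8000/en-US/_bump

The host is a placeholder and the locale segment (en-US here) should match your own; restarting splunkd alone may not clear this cache.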
Hi, I am having trouble providing the correct regex to extract the hostname from the file location. The file structure looks like this:

/var/log/syslog/splunk-lb/ise/switch01.log

I need only switch01 as the hostname, but Splunk adds switch01.log. The regex I use is:

(?:[\/][^\/]*){1,}[\/](\w*)

Any idea how to modify the regex to match only switch01?

Thanks, Alex
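A minimal sketch of a host override based on the source path, assuming you can edit props.conf and transforms.conf where the input is parsed (the stanza name set_host_from_source is illustrative):

# transforms.conf
[set_host_from_source]
SOURCE_KEY = MetaData:Source
REGEX = ([^/]+)\.log$
DEST_KEY = MetaData:Host
FORMAT = host::$1

# props.conf
[source::/var/log/syslog/splunk-lb/ise/*.log]
TRANSFORMS-set_host = set_host_from_source

The capture group ([^/]+)\.log$ anchors on the end of the path and stops before the .log extension, so switch01.log yields switch01.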
I am following the blogs below to use a Splunk Cloud trial in SAP Cloud Integration. However, I am getting the error below when trying to call the Splunk Cloud trial URL https://<hostname>.splunkcloud.com:8088/services/collector/event

Error:

java.net.ConnectException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target, cause: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target

I tried adding the root certificate to my keystore but still get the same error. Also, when trying to add the URL to Cloud Connector (after adding the root certificate to the keystore), I get a handshake error.

Is there a way to resolve this?

Blogs:
https://community.sap.com/t5/technology-blogs-by-members/splunk-part-1-sap-apim-logging-monitoring/ba-p/13444151
https://community.sap.com/t5/technology-blogs-by-members/splunk-part-2-sap-cpi-mpl-logging/ba-p/13446064
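A PKIX path-building failure usually means the keystore is missing part of the certificate chain (often the intermediates, not the root). One way to confirm the HEC endpoint and token are fine and isolate the problem to trust, assuming a shell with curl is available, is:

curl -k "https://<hostname>.splunkcloud.com:8088/services/collector/event" -H "Authorization: Splunk <hec-token>" -d '{"event": "connectivity test"}'

The -k flag disables certificate validation, so a success here while SAP CPI fails points at the keystore. Importing the full chain actually served by the endpoint (for example via openssl s_client -showcerts) is the usual fix; trial stacks do not necessarily present a chain rooted in the public CA you imported. <hostname> and <hec-token> are placeholders.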
When we go to look at the UI, it sometimes says the app is missing, so the UI is unavailable. When it does let us look at the UI, we can't create anything because the app is missing. I was under the impression from the documentation that it is created the second you open that UI, so I am unsure what is going on.
Hi all - I am a Splunk novice, especially when it comes to writing my own queries. I have created a Splunk query that serves my first goal: calculate the elapsed time between two events. Now, goal #2 is to graph that over a time period (e.g. 7 days). What is stalling my brain is that these events happen every day - in fact, they are batches that run on a cron schedule, so they had better be happening every day! So I am unable to just change the time preset and graph this, because I am using the earliest and latest events to calculate the beginning and end. Here is my query to calculate duration:

index="XYZ" "Batchname1"
| stats earliest(_time) AS Earliest, latest(_time) AS Latest
| eval Elapsed_Time=Latest-Earliest, Start_Time_Std=strftime(Earliest,"%H:%M:%S:%Y-%m-%d"), End_Time_Std=strftime(Latest,"%H:%M:%S:%Y-%m-%d")
| eval Elapsed_Time=Elapsed_Time/60
| table Start_Time_Std, End_Time_Std, Elapsed_Time

Any ideas on how to graph this duration over time so I can develop trend lines, etc.? Thanks all for the help!
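A minimal sketch, assuming one batch run per calendar day: bin each event into its day (keeping the original _time intact by binning into a new field), compute the earliest/latest spread per day, and chart the result:

index="XYZ" "Batchname1"
| bin _time span=1d AS day
| stats earliest(_time) AS Earliest, latest(_time) AS Latest BY day
| eval Elapsed_Minutes=(Latest-Earliest)/60
| eval _time=day
| table _time Elapsed_Minutes

Run over the last 7 days (or any range) and render as a line chart, this gives one duration point per day, which is the trend line you are after.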
I made my configuration in inputs.conf to ingest data into Splunk, but I am not getting data. While investigating, I realized the configured source is not showing any data, and I can't see the source path in the index in Splunk. Is there a reason why I am not seeing the source after configuring inputs.conf?
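Two quick checks, assuming a [monitor://...] stanza on a forwarder:

$SPLUNK_HOME/bin/splunk btool inputs list monitor --debug

index=_internal sourcetype=splunkd (component=TailReader OR component=TailingProcessor)

The btool output shows whether your stanza was actually loaded and from which file (a typo in the stanza header or app precedence is a common cause); the _internal search shows whether the tailing processor picked the file up. Also confirm that the index named in the stanza exists and that the user running Splunk can read the monitored path - a missing index and a permissions problem both produce exactly this silent no-data symptom.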
The Carbon Black Response app for SOAR doesn't allow you to quarantine/unquarantine a device if it is offline. In the Carbon Black interface/API this is just a flag that is set, so if the device is offline it is prevented from re-connecting. This is the desired behaviour, but it seems from the Carbon Black Response app code that a check for whether the device is online has been added. Can this be removed?
I would like to rename the field values that exist in one column and add them into their own separate column, while keeping the original column (with the values before they were renamed) to show how they map to the new values in the new column. The idea is: if I have a list of IDs (original) that I want to map to different names in a separate column that represent those original IDs (basically aliases), but I want to keep both columns in a list view, how would I go about doing that? Example display:

Original IDs    New IDs
P1D             Popcorn
B4D             Banana
O5D             Opp
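A minimal sketch, assuming the alias pairs live in a lookup file (id_aliases.csv with columns OriginalID and NewID are illustrative names, as is the event field id):

index=...
| lookup id_aliases.csv OriginalID AS id OUTPUT NewID
| table id NewID

Because lookup with OUTPUT adds NewID as a new field rather than overwriting id, both columns survive in the results. For a short, fixed list, an inline eval avoids maintaining a lookup file:

| eval NewID=case(id=="P1D","Popcorn", id=="B4D","Banana", id=="O5D","Opp")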
Hi all, I have a message field containing multiple success messages. I am using stats values(message) as message, and I want to show only one of the success messages in the output. To restrict the other message values, I used the query below with mvdedup, but it is not filtering:

| eval Result=mvdedup(mvfilter(match(message, "File put Succesfully*")
    OR match(message, "Successfully created file data*")
    OR match(message, "Archive file processed successfully*")
    OR match(message, "Summary of all Batch*")
    OR match(message, "processed successfully for file name*")
    OR match(message, "ISG successful Call*")
    OR match(message, "Inbound file processed successfully")
    OR match(message, "ISG successful Call*")))
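One likely culprit: match() takes a regular expression, not a wildcard pattern, so the trailing * in "File put Succesfully*" means "zero or more of the preceding character" rather than "anything after". And mvdedup only collapses exact duplicate values, so distinct success messages all survive. A sketch that keeps a single success message (patterns mirror the original query; mvindex picks the first match):

| eval Result=mvfilter(match(message, "File put Succesfully|Successfully created file data|Archive file processed successfully|Summary of all Batch|processed successfully for file name|ISG successful Call|Inbound file processed successfully"))
| eval Result=mvindex(Result, 0)

Regex alternation (|) replaces the OR chain, and substring matching is the default with match(), so no wildcards are needed.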
I have created a search that contains a field that is unique. I am using this search to populate the index. However, for some reason, when I try to check whether the record is in the index, it doesn't work for me. The closest I have come is this:

| localop
| rest .... ```first search key field```
| eval soar_uuid=id+"_RecordedFuture"
| append [search index=rf-alerts soar_uuid | rename soar_uuid as ExistingKey]
| table soar_uuid, triggered, rule.name, title, classification, url, ExistingKey

The above returns a list of new records with a blank ExistingKey field, and matching keys for soar_uuid of existing records with a blank soar_uuid field. If I could just populate either field with the other, then I could remove all the duplicates. I want to remove the new records that match the existing records before writing the events to the index. appendsearch instead of append doesn't seem to return the existing records.
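A minimal sketch of the de-duplication step, assuming soar_uuid (new records) and ExistingKey (existing records) carry the same key value; these pipes go after your append:

| eval key=coalesce(soar_uuid, ExistingKey)
| eventstats values(ExistingKey) AS seen BY key
| where isnotnull(soar_uuid) AND isnull(seen)

coalesce() fills one common field from whichever of the two is present, eventstats marks every key that appears among the existing records, and the final where keeps only the new rows (soar_uuid set) whose key was never seen in the index.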
I'm in a situation where I have two servers, one active and one passive. I had to deploy the TA on both servers and report the status of a service. So the active server reports the service as "Running" and the passive server says the service is "Stopped".

I have tried writing SPL, but my worry is how to get it reported if the service stops on the active server, or if there is no data from the active server. There should always be at least one server reporting the service as "Running"; only during a DR situation would the server name change.

index=mday source="service_status.ps1" sourcetype=service_status os_service="App_Service" host=*papp01
| stats values(host) AS active_host BY status
| where status=="Running"
| append
    [ search index=mday source=service_status.ps1 sourcetype=service_status os_service="App_Service" host=*papp01
    | stats latest(status) AS status BY host, os_service, service_name ]
| filldown active_host
| where active_host=host AND status!="Running"
| table host, active_host, os_service, service_name, status

Any help is much appreciated.
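A simpler framing, assuming the real alert condition is "no server currently reports Running": take the latest status per host and count the Running ones. This survives a DR failover where the active host name changes, and it also fires when the active server stops reporting while the passive one still says Stopped:

index=mday source="service_status.ps1" sourcetype=service_status os_service="App_Service" host=*papp01
| stats latest(status) AS status BY host, os_service, service_name
| stats count(eval(status=="Running")) AS running_hosts, values(host) AS hosts BY os_service, service_name
| where running_hosts=0

If both servers could stop sending data entirely, pair this with an alert on zero results, since the search returns nothing when no events arrive at all.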
Hello, I have a search query like this:

index=test dscip=192.168.1.1 OR dscip=192.168.1.2 ...

I would like to build this list of IPs based on system-alias in my lookup. This is my sample lookup.csv:

system-alias    system-ip
prod            192.168.1.1
dev             192.168.2.2
prod            192.168.1.2

So what should the search query look like if I want to search only for the prod IPs?

P
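A minimal sketch using a subsearch to expand the lookup into an OR list (note the single quotes needed around the hyphenated field name in the where clause, and the rename so the subsearch returns the field your events use):

index=test
    [ | inputlookup lookup.csv
      | where 'system-alias'="prod"
      | rename system-ip AS dscip
      | fields dscip ]

The subsearch returns dscip=192.168.1.1 OR dscip=192.168.1.2, which Splunk splices into the outer search, so new prod rows added to the lookup are picked up automatically.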
Dear team, may I know why no further version of this Splunk application (Splunk App for Jenkins) has been released since 2020? This is a fantastic app, useful for visualising Jenkins build status, access logs, and other statistical data. Could you please check and confirm? Thanks.
Hello, the Cisco add-on v2.7.3 slows our Splunk Enterprise production platform down considerably when it is activated. The search "index=xxxxx sourcetype=cisco:ios" goes from a few ms on our development platform to more than 1 hour on our production platform. Do you know whether any configuration in the add-on could affect the performance of operations in a way that depends on the platform configuration? Thanks a lot for your suggestions!
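Search-time field extractions scoped to a sourcetype run against every event returned, so a first diagnostic step, assuming CLI access on both platforms, is to compare the effective props for cisco:ios between development and production:

$SPLUNK_HOME/bin/splunk btool props list cisco:ios --debug

A large difference in EXTRACT-, REPORT-, or EVAL- entries, or an automatic lookup applied at search time, would explain a per-event cost that grows with production volume; the Job Inspector on the slow search shows where the time is actually spent.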
Hi Splunkers, I have an issue with log forwarding from an HF to Splunk Cloud, and I need a suggestion about troubleshooting.

In this environment, some firewalls have been set up to send data to an HF, from which the data goes on to Splunk Cloud. So the global flow is: firewall ecosystem -> HF -> Splunk Cloud.

On the HF, a network TCP input has been configured and it works fine: all firewalls added until now send data to Cloud correctly. Yesterday, the firewall admin configured a new one to send data to Splunk, but I cannot see its logs in our environment. So, first of all, I asked the network admin to check the log forwarding configuration, and everything had been done properly.

Then I checked whether logs are coming from the firewall to the HF: a simple tcpdump on the configured port shows no resets or other suspicious flags. All captured packets have [P] and [.] flags, with ACKs. So data arrives where it is supposed to be collected.

Next I checked the _internal logs, filtered on the firewall IP; no errors are shown by this search. I got logs from metrics.log and license.log (mainly from metrics), but no error messages are returned. However, when I query the configured index and sourcetype (which collect logs properly from the other firewalls), I cannot see the new one. I used both the IP and the hostname of the firewall device, but no logs are returned.

My thought: could it be that the data arrives at the HF but then doesn't go to Cloud? In that case I would presume some error logs should appear. And supposing my assumption is correct, how could I check it?
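One way to confirm whether the HF is actually turning the new firewall's stream into events (assuming the HF forwards its own _internal logs to Cloud, which is the usual setup) is to look at its per-host throughput metrics:

index=_internal host=<hf-hostname> source=*metrics.log group=per_host_thruput
| stats sum(kb) AS kb BY series

If the new firewall never appears as a series, the HF is receiving the TCP stream but not parsing it into events - worth checking whether the new device sends on a port or protocol (TCP vs UDP, TLS vs plaintext) that differs from the input stanza, and whether a host or sourcetype override routes it into an index you are not querying. <hf-hostname> is a placeholder.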
Hello Community, I have a problem with the latest Enterprise Security version. In the Security Posture dashboard, when I drill down on the Top Notable Events, the URL is returned correctly, but one thing breaks the whole drilldown: the URL encoding is sometimes applied twice. So instead of %20 replacing spaces in the rule name used in the drilldown, the URL includes %2520. This breaks the drilldown, which then shows all rule names instead.

The weirdest part is that, independent of the rule name clicked, it sometimes works and sometimes doesn't. A reload through the associated button on the Incident Review page also fixes the error, but this is still a nuisance in daily business. I have searched the web for similar experiences but haven't found anything.

My question is whether anybody else has the same problem, so I can make sure this is not some error from local files (which I checked, but it's always possible I missed something) but something that is broken by default. I'm not fond of changing anything in the deeper code of Enterprise Security, but if anybody has a solution to the problem, I'd be glad!
I see different forwarder counts using the following methods:

- Looking at forwarder management on the license master
- Looking at Forwarders: Deployment on the license master
- Looking at dmc_forwarder_assets.csv inside /opt/splunk/etc/apps/splunk_monitoring_console/lookups/ on the license master

So, which one should I trust, and is there a better way?
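A cross-check that doesn't depend on any cached asset table is to count distinct forwarders from the indexers' live connection metrics (assuming this instance can search _internal across the deployment):

index=_internal source=*metrics.log group=tcpin_connections
| stats dc(hostname) AS forwarders

The Monitoring Console numbers come from the dmc_forwarder_assets.csv lookup, which is rebuilt by a scheduled search, so it can lag or retain decommissioned forwarders; the metrics count reflects whichever forwarders actually connected in the chosen time range, which is usually the number to trust.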
Are there any good project ideas? I just started creating dashboards for our network team. I am trying to get more security-based projects and was wondering if there are any good ideas to help me get into security. Very new to Splunk!
I am using the SolarWinds plugin, and I want to be able to click a device and have it take me to the device page in SolarWinds. I have taken the link from SolarWinds and added the $LinkToken$ to the end, but it is not taking me there. Any advice on handling this? I am creating the dashboard using Splunk Dashboard Studio.
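One thing to check: Dashboard Studio does not use the Classic SimpleXML token syntax, so $LinkToken$ only resolves if you explicitly set a token with that name. For a URL drilldown on a table, the clicked row's values are exposed as $row.<fieldname>.value$. A sketch, where NodeID is an assumed column name (substitute whichever field holds your SolarWinds node ID) and the host is a placeholder:

https://<solarwinds-host>/Orion/NetPerfMon/NodeDetails.aspx?NetObject=N:$row.NodeID.value$

The NodeDetails path is the usual SolarWinds Orion pattern, but copy the exact URL from a real device page in your instance to be safe.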
Hi,

We are trying to integrate data that is in Splunk with ELK using a heavy forwarder. Can anyone suggest how inputs.conf can be configured so that it listens to the data that is on the search head, and how outputs.conf can then send the data to ELK via stash?

Thanks
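A heavy forwarder only forwards data that passes through its own pipeline; it cannot "listen" to data already indexed behind a search head. The usual pattern is therefore to route the original feed through the HF and clone a raw copy to ELK. A minimal sketch of the ELK-facing output, assuming the Splunk-facing group splunk_indexers is already defined and Logstash listens with a matching tcp input (host, port, and group names are illustrative):

# outputs.conf on the heavy forwarder
[tcpout]
defaultGroup = splunk_indexers, elk

[tcpout:elk]
server = logstash.example.com:5044
sendCookedData = false

sendCookedData = false makes Splunk send plain raw events instead of its cooked wire protocol, which is what a non-Splunk receiver expects. For data that is already indexed, the alternative is a scheduled search that exports its results to the third-party system, since indexed events cannot be re-forwarded this way.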