
Hi, could you help me edit the search below?

index=test sourcetype="centino" | stats count, values(change_asset) as changed_asset, values(brief) as description, values(severity) as severity, values(exploitation_method) as exploitation_method, values(first_find) as first_find, values(last_find) as last_find, values(systems) as system by id

1. In the output, the first_find and last_find fields should display only the date (e.g. 2023-01-22).
2. Instead of receiving all the notifications, raise an alert only when today's date matches first_find or last_find. (Today's date changes every day, so it must not be hard-coded.)

Thanks!
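A minimal sketch of one way to do this, assuming first_find and last_find are string fields that begin with a %Y-%m-%d date (adjust the strptime format to match the real timestamps). Formatting the dates before the stats avoids multivalue comparison issues, and comparing against strftime(now(), ...) keeps "today" dynamic:

```
index=test sourcetype="centino"
| eval first_find=strftime(strptime(first_find, "%Y-%m-%d"), "%Y-%m-%d"),
       last_find=strftime(strptime(last_find, "%Y-%m-%d"), "%Y-%m-%d")
| eval today=strftime(now(), "%Y-%m-%d")
| where first_find=today OR last_find=today
| stats count, values(change_asset) as changed_asset, values(brief) as description,
        values(severity) as severity, values(exploitation_method) as exploitation_method,
        values(first_find) as first_find, values(last_find) as last_find,
        values(systems) as system by id
```

Saved as an alert on a daily schedule with "trigger when number of results > 0", this only fires on days that match.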
I'm trying to get the kong plugin to work with Splunk Observability Cloud. Here is my agent_config.yaml relating to kong:

receivers:
  smartagent/kong:
    type: collectd/kong
    host: 127.0.0.1
    port: 8000

service:
  pipelines:
    metrics:
      receivers: [hostmetrics, otlp, signalfx, smartagent/signalfx-forwarder, smartagent/kong]

When I start my Splunk OTel collector I am getting metrics from the server but not the Kong service. Checking journalctl I see:

otelcol[25528]: 2023-01-30T17:08:05.855Z error signalfx/handler.go:189 Traceback (most recent call last):
otelcol[25528]:   File "/usr/lib/splunk-otel-collector/agent-bundle/lib/python3.8/site-packages/sfxrunner/scheduler/simple.py", line 57, in _call_on_interval
otelcol[25528]:     func()
otelcol[25528]:   File "/usr/lib/splunk-otel-collector/agent-bundle/collectd-python/kong/kong/reporter.py", line 56, in update_and_report
otelcol[25528]:     self.kong_state.update_from_sfx()
otelcol[25528]:   File "/usr/lib/splunk-otel-collector/agent-bundle/collectd-python/kong/kong/kong_state.py", line 63, in update_from_sfx
otelcol[25528]:     self.update_resource_metrics(status['signalfx'])
otelcol[25528]: KeyError: 'signalfx'
otelcol[25528]: {"kind": "receiver", "name": "smartagent/kong", "pipeline": "metrics", "monitorID": "smartagentkong", "monitorType": "collectd/kong", "runnerPID": 25545, "createdTime": 1675098485.8552756, "logger": "root", "sourcePath": "/usr/lib/splunk-otel-collector/agent-bundle/lib/python3.8/site-packages/sfxrunner/logs.py", "lineno": 56}

I have installed the kong plugin using these instructions: https://docs.splunk.com/Observability/gdi/kong/kong.html
I have 3 panels across the top of my dashboard and one table panel underneath. How do I keep the top row (the 3 panels) fixed so it doesn't scroll, and how do I remove the larger vertical scroll bar? The image below shows that the table has its own scroll bar, but the overall dashboard also has one. How do I remove this outer scroll bar?
Hello! I am trying to use map in a Splunk Dashboard Studio search to create a time chart showing a machine's utilization per day. I want to show it by day so I can add a trend line to my single-value utilization panel. To do this, I map my search by day, so the utilization is calculated per day rather than over the whole time range. Using the code below I am able to make a time chart displaying the machine's daily utilization in Dashboard Classic but not in Dashboard Studio:

Code:

index=example
| bin span=1d _time
| dedup _time
| eval start=relative_time(_time,"@d-1d"), end=relative_time(_time,"@d")
| eval day=strftime(_time,"%D %T")
| eval End=strftime(end,"%D %T")
| map maxsearches=30 search="search index=example earliest=\"$$start$$\" latest=$$end$$
    | transaction Machine maxpause=300s maxspan=1d keepevicted=T keeporphans=T
    | addinfo
    | bin span=1d _time
    | eval timepast=info_max_time-info_min_time
    | eventstats sum(duration) as totsum by Machine _time
    | dedup Machine _time
    | eval Util=min(round((totsum)/(timepast)*100,1),100)
    | stats values(Util) as \"Utilization\" by Machine _time date_mday"
| table _time Utilization Machine
| chart values(Utilization) by _time Machine
| fillnull value="0"

Code results in Dashboard Classic:
Code result in Dashboard Studio:

Why can't I map in Dashboard Studio? It states it is waiting for an input. How can I break up utilization by day to show the trend line?
I added 2 buttons (Delete + Update) to each row in a table, using the example script from https://community.splunk.com/t5/Splunk-Search/How-do-you-add-buttons-on-table-view/m-p/384712 -> table_with_buttons.js. In general everything works fine most of the time, but sometimes after a browser reload the JavaScript does not run and the buttons are not colored. If I use one of the dropdowns and select an item, the buttons are colored immediately. It should look like this: I adapted the script a little bit, and I always run https://host/en-US/_bump after each change.

require([
    'underscore',
    'jquery',
    'splunkjs/mvc',
    'splunkjs/mvc/tableview',
    'splunkjs/mvc/simplexml/ready!'
], function(_, $, mvc, TableView) {
    var CustomRangeRenderer = TableView.BaseCellRenderer.extend({
        canRender: function(cell) {
            // Enable this custom cell renderer only for these fields
            return _(["Update", "Delete"]).contains(cell.field);
        },
        render: function($td, cell) {
            // Render a colored button in the cell based on the field name
            var strCellValue = cell.value;
            var strHtmlInput;
            if (cell.field === "Update") {
                strHtmlInput = "<input type='button' style='background-color:DodgerBlue' class='table-button btn-primary' value='" + strCellValue + "'></input>";
            } else if (cell.field === "Delete") {
                strHtmlInput = "<input type='button' style='background-color:OrangeRed' class='table-button btn-primary' value='" + strCellValue + "'></input>";
            }
            $td.append(strHtmlInput);
        }
    });
    mvc.Components.get('taskCollectionTable').getVisualization(function(tableView) {
        // Add custom cell renderer; the table will re-render automatically.
        tableView.table.addCellRenderer(new CustomRangeRenderer());
        tableView.table.render();
    });
});

And this part in the dashboard:

<row depends="$alwaysHideCSSPanel$">
  <panel>
    <html>
      <style>
        #taskCollectionTable table tbody tr td {
          cursor: default !important;
        }
        #taskCollectionTable table tbody tr td input.table-button {
          width: 83px !important;
          position: relative;
          left: 5%;
        }
      </style>
    </html>
  </panel>
</row>

What am I missing or doing wrong? Thanks
Hi everyone, I'm kind of new to Splunk. I have two indexes:
1. Stores events (relevant fields: hostname, destPort)
2. Stores information about infrastructure (relevant fields: host, os)
I need to show which ports are used by which OS. From the first index I need to know which host is using which ports. From the second index I want to know which host is running which OS. I then want to see which OS uses which ports. To make it even more difficult, the host is called hostname in index 1 and host in index 2. I basically have to group by hostname and then by OS. How can I achieve this? What concepts can I use? I have two working searches for step 1 and step 2, I just don't know how to "join" them together. Any help is greatly appreciated, thanks.
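One common pattern for this (often preferred over the join command) is to search both indexes at once, normalize the two host field names with coalesce, and then aggregate twice with stats. The index names below are placeholders for the two real indexes:

```
(index=events_index) OR (index=infra_index)
| eval host_norm = coalesce(hostname, host)
| stats values(destPort) as ports, values(os) as os by host_norm
| stats values(ports) as ports by os
```

The first stats gathers each host's ports and OS onto one row; the second regroups those rows by OS, giving the ports-per-OS view.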
Hi, I have about 500 hosts on which to configure syslog.global.loghost across multiple vCenters. We are forwarding the logs to a Splunk universal forwarder. Some ESX hosts keep getting this error: The host "splunklog-domainname:1514" has become unreachable. Remote logging to this host has stopped. This ends up filling up the vCenter logs, and vCenter stops responding. Has anyone seen this issue? Thanks...Tom
Unable to connect from FortiSOAR to a Splunk Cloud instance. The health check in FortiSOAR shows Disconnected, even though I have entered the correct credentials and port.
Hi, good day! I want to export all the data (business transactions) of each application deployed on a server that has an AppDynamics agent installed. I would also like to know about the following:
Source
Target
Connector API Name
Security
Integration Pattern
Network Connectivity
Hello, preferably based on Linux (Red Hat, for instance), which log collector would you use to collect any kind of log (network devices, Check Point Log Exporter, system logs, application logs, Windows servers...)? Is there any newer solution than syslog-ng or rsyslog? Thanks.
Is it possible to allow access to the SaaS controller only from an allowed list of IP addresses defined by the customer, i.e. to prevent access from unauthorized IP addresses?
I have a question because I am confused about the majority principle in search head clustering captain elections. Assume a dynamic captain election.

[1] In a captain election, a majority of the cluster members must vote for a member for it to be elected. What does "majority" mean here? For example, if there are 5 search heads and one is down, is the required majority 3 out of the 4 live members, or 3 out of all 5 configured members? In other words, are members that are down still counted toward the total?

[2] Also, according to the majority rule, there must be at least three search heads. I don't understand why. For example, if there are only two members, A and B: B declares itself captain first, then A votes for B and B votes for itself, so B has more than half of the votes. Couldn't B then become captain?
Hello, I have a dashboard which uses one token; as a result, two things happen:
1. a search is conducted on an index with some static data
2. I want to load data from an external source (a .txt file available online) which changes every hour
This is the source: https://tgftp.nws.noaa.gov/data/observations/metar/stations/KJFK.TXT
Desired state: the dashboard takes an airport's ICAO code, displays static information about that airport from the index, and also loads the current weather information from the external source (which changes very frequently). How can I load this external data in a dashboard without saving it in Splunk? I should note that I have no option to ingest this data into an index every hour or to create a lookup file locally; the data changes all the time, so ideally I would fetch it online every time. Many thanks in advance!
I wanted to bring this issue to your attention. We upgraded DB Connect from 3.10.0 to 3.11.0 back in November 2022. We use an external HEC destination for DB Connect to send its data to before it gets to Splunk, instead of the local/built-in DB Connect destination (and have been for over a year). There seems to be a bug in sending to an external HEC destination. We started getting complaints in early January 2023 from users that data was missing in Splunk. We temporarily moved these inputs back to the internal HEC and the issue went away. I set up a test DB Connect on 3.11.0 with the same inputs, sending to the external HEC and then to a test index. We compared the test data with production data and saw that throughout the day, there were many times when the inputs ran but the data did not make it into Splunk. The first clue there was an issue was seeing this in the logs every time the inputs ran:

[Scheduled-Job-Executor-3] ERROR c.s.d.s.d.r.HttpEventCollectorLoadBalancer - failed to post events:

I remembered that we upgraded DB Connect back in November, so I downgraded the test DB Connect server back to 3.10.0. The "failed to post events" error went away, and all the data in test and prod matched up with no loss. I don't know what changed in DB Connect 3.11.0 and higher (3.11.1 has the same issue), but this is a fairly big one for me. I will stay with 3.10.0 for now, but someone from Splunk needs to look into this issue.
Hi, I am trying to list the best ways to map non-CIM-compliant data to the right data model. Is a field alias the best way? And is there another way to correctly map data to be CIM compliant? Last question: if we use an add-on like the Splunk Add-on for Windows, does that mean our business data will automatically be made CIM compliant? Thanks
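As a rough sketch of the usual props.conf techniques, field aliases are only one option: calculated fields (EVAL-) and automatic lookups can also produce the CIM field names, and tags/eventtypes then map the events into a data model. The sourcetype, field, and lookup names below are made up for illustration:

```
# props.conf -- hypothetical sourcetype and field names
[my:custom:sourcetype]
FIELDALIAS-cim_src = source_address AS src
EVAL-action = if(result=="0", "success", "failure")
LOOKUP-vendor_info = my_vendor_lookup vendor_code OUTPUT vendor_product
```

A vendor add-on like the Splunk Add-on for Windows ships this kind of mapping for the sourcetypes it knows about, but it only applies to data ingested with those sourcetypes, not to arbitrary business data.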
Hi all, I need some guidance for calculating an "SLA Achieved" percentage column. This is what my results look like after running the base search:

Severity | Count_of_Alerts | Mean_Time_To_Close | SLA Target | SLA Achieved in %
S1       | 10              | 7 mins 8 secs      | 15 mins    |
S2       | 5               | 6 mins 25 secs     | 45 mins    |

I have referenced the solution provided by @ITWhisperer in https://community.splunk.com/t5/Splunk-Search/adding-percentage-of-SLA-breach/m-p/572942#M199687 but in my case we also have a count column.

We are OK with considering only the minutes portion of the time to close and ignoring the seconds if it is too complicated. How can I calculate my SLA achieved in %? Is it as simple as doing

| eval SLA_Achieved = (Mean_Time_to_close*SLA_Target)/100

One further optimization would be: if the SLA % achieved is less than the target, color that cell green, else red (something along those lines).
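One hedged reading of "SLA achieved %" with only a mean available is how much of the SLA window the mean close time consumes. Assuming both durations are first converted to seconds (the field names below are made up, and "7 mins 8 secs" would need to be parsed into its numeric parts first), a sketch could be:

```
| eval mttc_seconds = mttc_minutes*60 + mttc_secs
| eval sla_seconds = sla_minutes*60
| eval SLA_Consumed_pct = round(mttc_seconds / sla_seconds * 100, 1)
| eval SLA_Status = if(mttc_seconds <= sla_seconds, "Met", "Breached")
```

The green/red cell coloring can then be done with the table column's color formatting in the dashboard rather than in SPL.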
Hello, we run an indexer that also functions as a deployment server. I have already configured it to use our CA cert for the Web UI on port 8000 as well as for the input port 9997; both work properly. However, I wasn't able to set our certificate for communication on the mgmt port 8089. For each request, it returns the pre-shipped self-signed certificate. Other solutions from this board didn't work, unfortunately. We are running Splunk Enterprise v9.0.3. Configs on the indexer:

server.conf

[sslConfig]
enableSplunkdSSL = true
sslVersions = tls1.2
sslRootCAPath = /opt/splunk/etc/auth/<ourcert>.pem
sslVerifyServerName = true
sslVerifyServerCert = true
sslPassword = <PW>
cliVerifyServerName = true

inputs.conf

[splunktcp-ssl:8089]
disabled = 0

[splunktcp-ssl:9997]
disabled = 0

[SSL]
serverCert = /opt/splunk/etc/auth/<ourcert>.pem
sslPassword = <PW>
requireClientCert = false
sslVersions = tls1.2
sslCommonNameToCheck = splunk.domain1,splunk.domain2

I'd be really happy if someone could help me out with this! Thank you!
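For what it's worth, the management port (8089) takes its certificate from serverCert under [sslConfig] in server.conf, not from inputs.conf; the [splunktcp-ssl:...] stanzas only cover forwarder data inputs such as 9997. A sketch of the missing piece, reusing the placeholder paths from the config above (restart splunkd after the change):

```
# server.conf on the indexer
[sslConfig]
enableSplunkdSSL = true
serverCert = /opt/splunk/etc/auth/<ourcert>.pem
sslPassword = <PW>
sslVersions = tls1.2
sslRootCAPath = /opt/splunk/etc/auth/<ourcert>.pem
```

Without a serverCert setting in [sslConfig], splunkd falls back to the pre-shipped self-signed certificate on 8089, which matches the behavior described.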
We have Splunk Cloud and SOAR Cloud in our environment. We want to integrate the SOAR audit log into Splunk Cloud. We have tried the "Splunk App for SOAR" app. The app has a built-in feature for index creation, so we created the index from that built-in feature. We can't see any other configuration options. So how can these logs be integrated into Splunk?
Hi, I have a table in a dashboard which shows some information. For every row of the table I want to include a column which shows a link to another dashboard. This link has to be different for every row (including the filters in the link, like "?form.time_select.earliest=..."). I tried to do that using the $click.value2$ token in the "Link to custom URL" drilldown. The problem is that the token returns the value of the clicked field, but some special characters in that value are converted to a different format (for example, the character "?" is converted into "%3F"). Is there a way to get the exact string value of the clicked field with the $click.value2$ token? Or is there maybe a better way to solve my problem? Thanks for the help!
Hi, I am using Splunk Cloud and my preferred time zone is IST, so most logs display in IST. However, some logs are reported in UTC and arrive at the search head with UTC timestamps. I want those UTC logs to be displayed in IST as well. Can you please help me with this? If the fix is to use the TZ attribute in props.conf, what value should TZ have, and must props.conf be edited on the HF or on the indexer? Thanks in advance.
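If the raw events carry UTC timestamps without an explicit offset, the usual fix is a TZ override in props.conf on the first full Splunk instance that parses the data: the heavy forwarder if there is one, otherwise the indexer (in Splunk Cloud, index-time settings like this are typically applied via the HF or a support request). The sourcetype name here is made up:

```
# props.conf -- deploy on the HF (or indexer) that parses these events
[my:utc:sourcetype]
TZ = UTC
```

TZ only tells Splunk how to interpret the raw timestamps at index time; events are then rendered in each user's preferred time zone (IST) at search time. Note it affects only newly indexed data, not events already indexed with the wrong offset.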