All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi, can I ask for your help with this small problem, please? When I read the content of the lookup using any criteria, I receive the attribute Attr1, which is a multivalue attribute:

Attr1
7470486
7470487
7470597
7470543

I want to create a query equivalent to: index=test (ID=7470486 OR ID=7470487 OR ID=7470597 OR ID=7470543)
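A minimal sketch of one common approach, assuming the lookup file is named mylookup.csv (hypothetical) and the filter criteria go in a where clause: feed the lookup through a subsearch, expand the multivalue field, and let format build the OR expression.

index=test
    [| inputlookup mylookup.csv
     | where <your criteria>
     | fields Attr1
     | mvexpand Attr1
     | rename Attr1 AS ID
     | format ]

The format command turns the subsearch rows into ( ( ID=7470486 ) OR ( ID=7470487 ) OR ... ), which is exactly the expression the outer search needs.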
I am trying to configure formatting for each column in a table on a dashboard, but the column formatting is not changing:

"columnFormat": {
    "ColumnA": {
        "align": "> table | pick(alignment)"
    }
},
"alignment": [
    "center"
]
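A minimal sketch of a shape that may work, assuming a static alignment is acceptable: in Dashboard Studio the align option for a column can take a literal value, and if a DSL selector such as "> table | pick(alignment)" is used instead, the "alignment" array normally needs to live under the visualization's "context" object rather than next to "columnFormat".

"options": {
    "columnFormat": {
        "ColumnA": {
            "align": "center"
        }
    }
},
"context": {
    "alignment": ["center"]
}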
Hi,

I am trying to show the number of DNS logs per hour on a graph, with the upper and lower bound lines showing on the same plot. This is my current query:

| tstats count where index=dns groupby _time span=1h
| eval time=strftime(_time,"%Y-%m-%d %H:%M:%S")
| stats avg(count) as lambda
| eval alpha=0.01
| eval lower=lambda/(2*n)
| eval upper=lambda/(2*(1-alpha))
| timechart span=1h sum(count) as count, avg(lower) as lower, avg(upper) as upper

Currently, nothing is output. Can you please help? Thanks
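Two things stand out: stats avg(count) as lambda collapses everything to a single row, discarding _time and count, so the final timechart has nothing to chart; and n is never defined, so lower evaluates to null. A sketch of the same pipeline using eventstats, which computes the average without collapsing the rows (the bound formulas here are placeholders for whatever statistics are actually intended):

| tstats count where index=dns groupby _time span=1h
| eventstats avg(count) as lambda
| eval alpha=0.01
| eval lower=lambda*alpha
| eval upper=lambda*(1-alpha)
| timechart span=1h sum(count) as count, avg(lower) as lower, avg(upper) as upper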
Good afternoon. Something is wrong when filtering events before indexing: for unknown reasons the filter is not being applied correctly, and events that should be discarded are still being sent. The configuration I have in the props and transforms files is the following (this filter has been applied both locally and in the default and local folders of the corresponding app):

props.conf:
[traffic]
(This is the sourcetype through which these events pass.)
TRANSFORMS-null = filter_to_null

transforms.conf:
[filter_to_null]
REGEX = (?:6|7)\.company.com\.es\s\d(?:[^,]*,){25}(?!53|389|3389|445|20|21|22|23|80|8443|8080|443)
DEST_KEY = queue
FORMAT = nullQueue

I have looked at different forums but I can't find the problem, and in principle the regex does work, as it has been tested on different sites and in Splunk itself.
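Two common gotchas, for what they're worth: index-time TRANSFORMS only take effect on the instance that first parses the data (an indexer or heavy forwarder, not a universal forwarder or search head), and the REGEX is matched against the raw event text. A quick way to confirm which copy of the stanzas actually wins, run on the parsing tier:

$SPLUNK_HOME/bin/splunk btool props list traffic --debug
$SPLUNK_HOME/bin/splunk btool transforms list filter_to_null --debug

btool prints the effective merged configuration along with the file each line came from, which usually exposes a precedence or placement problem.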
Hi, Today we have a standalone Enterprise Security (ES) search head, which we would like to migrate to a new ES search head cluster (and keep all the knowledge objects). Are there instructions on how to turn a standalone ES search head into an ES search head cluster? Thanks, Sebastian
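For reference, the generic search head cluster bootstrap looks like the sketch below (hostnames, credentials, and the secret are placeholders, and this is standard Splunk SHC setup, not ES-specific). ES itself is then installed on the cluster through the deployer, and knowledge objects from the standalone head are typically staged in the deployer's shcluster/apps directory.

On each new member:
$SPLUNK_HOME/bin/splunk init shcluster-config -auth admin:changeme -mgmt_uri https://sh1.example.com:8089 -replication_port 9777 -secret <shcluster_secret> -shcluster_label es_shc
$SPLUNK_HOME/bin/splunk restart

On one member only:
$SPLUNK_HOME/bin/splunk bootstrap shcluster-captain -servers_list "https://sh1.example.com:8089,https://sh2.example.com:8089,https://sh3.example.com:8089" -auth admin:changeme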
How can we use an Octopus access token only to integrate with this REST API Modular Input add-on? The only options we have are OAuth1 and basic authentication; however, we only have user credentials and an Octopus access token.
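For what it's worth, the Octopus Deploy REST API accepts its API key in the X-Octopus-ApiKey request header, so any client that can send a custom HTTP header can authenticate without OAuth1 or basic auth. A quick sketch of a test call (server URL and key are placeholders):

curl -H "X-Octopus-ApiKey: API-XXXXXXXXXXXXXXXX" https://octopus.example.com/api/projects

If your version of the REST API Modular Input exposes a custom HTTP header properties setting in its input configuration, the same header/value pair can be supplied there instead of an auth type.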
Using Splunk 8.2.2.2. My dashboard is loaded with a bunch of big queries, but it makes my browsers (I tested a few, such as Chrome and Opera) crash whenever I want to browse through it. Not only my dashboard tab, but also any other tabs open with Splunk. Is there any solution for this? Best regards, Benjamin
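One common mitigation in Simple XML is to share a single base search across panels and post-process it, so the browser holds one result set instead of running and caching every big query separately. A minimal sketch with placeholder searches:

<search id="base">
  <query>index=main sourcetype=web | fields host, status</query>
</search>
<panel>
  <chart>
    <search base="base">
      <query>| stats count by status</query>
    </search>
  </chart>
</panel>

Keeping each panel's post-process to a stats/chart over the already-reduced base results usually cuts browser memory use considerably.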
Hi, we have successfully been deploying Splunk Universal Forwarder to our Mac fleet via Jamf since about 10.14 (Mojave). Recently we upgraded our fleet to Ventura (13.2.1) and are deploying Splunk UF 9.0.3, and now we are seeing GateKeeper prompts on users' machines asking them to click Allow, as the application was downloaded from the Internet. Nothing has changed in our script or the way we deploy, and I was wondering if anyone else had run into this. This is the part of our script handling the install, which then prompts for user interaction even though it should be silent:

#!/bin/sh
# NOTE: script assumes there is exactly ONE splunkforwarder*.tgz package already present in /tmp/splunkforwarder*.tgz
clear
set -x
cd /opt/
export SPLUNK_HOME=/opt/splunkforwarder
tar xvf /private/tmp/splunkforwarder-9.0.3-dd0128b1f8cd-darwin-universal2.tgz
cd ./splunkforwarder/
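One plausible culprit is the com.apple.quarantine extended attribute that macOS stamps on files downloaded from the Internet; GateKeeper prompts when quarantined binaries first run. Stripping the attribute from the tarball before extraction (or recursively from the extracted tree) should keep the install silent. A sketch using the same paths as the script above:

xattr -d com.apple.quarantine /private/tmp/splunkforwarder-9.0.3-dd0128b1f8cd-darwin-universal2.tgz
# or, after extraction:
xattr -dr com.apple.quarantine /opt/splunkforwarder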
I am trying to combine the results from 2 different search queries into a single chart. Is there a way to do this?

FIRST search:

source="a.csv" OR source="b.csv" OR source="c.csv" Company="x"
| eval Created=substr(Created, 1, 7)
| eval a=if(State="Closed",1,0)
| chart sum(a) AS closed_event by Created

SECOND search:

source="a.csv" OR source="b.csv" OR source="c.csv" Company="x"
| eval Created=substr(Created, 1, 7)
| chart count by Created,source

I want the first search as a line chart and the second search as a column chart, combining them. Thanks in advance
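A sketch of one way to merge the two result sets on Created with appendcols, using the searches as given; the combined table can then be rendered as a column chart with closed_event moved to a line via the Chart Overlay option:

source="a.csv" OR source="b.csv" OR source="c.csv" Company="x"
| eval Created=substr(Created, 1, 7)
| chart count by Created,source
| appendcols
    [ search source="a.csv" OR source="b.csv" OR source="c.csv" Company="x"
      | eval Created=substr(Created, 1, 7)
      | eval a=if(State="Closed",1,0)
      | chart sum(a) AS closed_event by Created ]

Note that appendcols pastes the subsearch columns row-by-row, so it assumes both searches produce the same Created buckets in the same order.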
Hi, I'm trying to find the alerts by user within a period of 2 hours, e.g. Alert1, Alert2. I need an SPL query for this condition. And one more condition: if, within 2 hours, there is an alert for a single user more than 3 times, it should raise an alert. Thanks.
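A minimal sketch, assuming the alerts are searchable in an index with user and alert_name fields (both names are placeholders):

index=alerts earliest=-2h
| stats count values(alert_name) AS alerts BY user
| where count > 3

Scheduled over a rolling 2-hour window, the where clause leaves only users who triggered more than 3 alerts, and that non-empty result can drive the alert action.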
We need to monitor/request info from the Vyopta platform to create specific dashboards in Splunk. Is this possible with an API or webhooks? Thanks
Hi there,

I am trying to convert dashboards from Simple XML to Dashboard Studio, but there are certain features that seem like they don't exist. In this particular use case, I have a table with multiple fields like user, website and so forth. What I want is drilldowns which only work when a certain column is clicked: when the "user" column is clicked, I want to set the field of the column and row as a token; when the "website" column is clicked, I want to open a link to the whois of this domain. This works quite well with Simple XML, as I can just set conditions on the field:

<drilldown>
  <condition field="website">
    <link target="_blank">https://who.is/whois/$row.website$</link>
  </condition>
  <condition field="user">
    <set token="username">$row.user$</set>
  </condition>
</drilldown>

But how do I do that in JSON in Dashboard Studio? I know that by adding multiple entries under "eventHandlers", I can specify multiple different types of drilldowns, even though the UI only allows you one:

"eventHandlers": [
    {
        "type": "drilldown.setToken",
        "options": {
            "tokens": [
                {
                    "token": "username",
                    "key": "row.user.value"
                }
            ]
        }
    },
    {
        "type": "drilldown.customUrl",
        "options": {
            "url": "https://who.is/whois/$row.website.value$",
            "newTab": true
        }
    }
]

However, I can't find a way to set a condition on a field like in Simple XML. Can anybody tell me how I can do this with JSON? The docs don't mention real coding with JSON. Thanks!
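As far as I know, per-column conditions aren't natively expressible in Dashboard Studio event handlers, but one workaround is to capture which column was clicked as a token and branch on it downstream. A sketch, assuming the "name" key resolves to the clicked column (worth verifying against your Splunk version):

"eventHandlers": [
    {
        "type": "drilldown.setToken",
        "options": {
            "tokens": [
                { "token": "clicked_column", "key": "name" },
                { "token": "username", "key": "row.user.value" },
                { "token": "website", "key": "row.website.value" }
            ]
        }
    }
]

A downstream panel, search, or link can then test $clicked_column$ to decide whether to use $username$ or to build the who.is URL from $website$.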
Hi,
We are monitoring services written in .NET with AppDynamics. At some point we got a problem with high CPU usage, and when looking at the node dashboard and the server info, the section showing the processes shows memory usage above 100% for some processes. Is this a known issue? Could it be AppDynamics having problems getting correct data when under heavy load?
We have recently upgraded our Splunk cluster to 9.0.1, and one of the application teams has highlighted that they are receiving the alert mail twice for the same alert, with the same results in both alert mails. The alert is scheduled to run every 2 hrs. What could be the reason behind it?
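A sketch of a scheduler-log check to confirm whether the search itself is firing twice (replace the saved search name):

index=_internal sourcetype=scheduler savedsearch_name="My Alert Name" status=success
| stats count BY scheduled_time, host

A count of 2 per scheduled_time, or rows from two different hosts, would suggest the alert is scheduled on more than one search head (e.g. a stray copy left over from the upgrade) rather than a problem with the email action itself.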
Hi All,
How can we stop the duplicate notables which are getting generated on the Incident Review page for the same event ID in Enterprise Security? Do we need to adjust any settings?

Time range:
Earliest = -70M
Latest = -10M
Cron schedule: every 35 minutes

It's happening for all correlation searches.
Regards, VK
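For what it's worth, a 60-minute window run every 35 minutes re-scans roughly 25 minutes of data on each execution, so any event in the overlap can produce a second notable. Two side notes: uppercase M means months in Splunk time modifiers (minutes are lowercase m), and a cron of */35 actually fires at :00 and :35 of each hour, i.e. alternating 35- and 25-minute gaps. A sketch of non-overlapping settings that keep the 10-minute ingest lag:

Earliest: -40m@m
Latest: -10m@m
Cron schedule: */30 * * * *

Each run then covers a unique, contiguous 30-minute slice, so an event is only ever evaluated once.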
Hi,
I am trying to whitelist some traffic from my search, so I decided to create a lookup table including src ip, dst ip, username, dst port, and src zone. For example:

src_ip        | dest_ip         | src_zone | dest_port | user  | whitelisted
*             | *               | center   | *         | *     | TRUE
172.16.20.44  | 13.58.90.11     | *        | 443       | Alice | TRUE
*             | 128.221.236.246 | *        | 443       | *     | TRUE
192.168.12.03 | *               | *        | *         | *     | TRUE
172.16.20.13  | *               | *        | *         | *     | TRUE
192.168.26.4  | *               | *        | *         | *     | TRUE
192.168.26.8  | 198.160.25.74   | *        | 443       | *     | TRUE
192.168.26.9  | 198.160.25.87   | *        | *         | *     | TRUE
*             | 142.250.70.174  | *        | *         | *     | TRUE

The problem is that the search matches when all the cells for the fields called in the lookup command have concrete values, but it does not match (does not whitelist) if a cell has a "*" or "Any" value.

| lookup whitelisttest.csv src_ip as src_ip dest_ip as dest_ip dest_port as dest_port user as user
| where isnull(whitelisted)
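Wildcard matching has to be switched on in a lookup definition; a CSV referenced directly by filename always matches exactly, so "*" is treated as a literal asterisk. A sketch of the usual fix (the definition name whitelisttest is a placeholder):

transforms.conf:
[whitelisttest]
filename = whitelisttest.csv
match_type = WILDCARD(src_ip) WILDCARD(dest_ip) WILDCARD(src_zone) WILDCARD(dest_port) WILDCARD(user)
max_matches = 1

Then reference the definition instead of the file:

| lookup whitelisttest src_ip, dest_ip, dest_port, user OUTPUT whitelisted
| where isnull(whitelisted)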
Hi all,
I would like to ask whether there is a way to add another field for filtering on the Splunk ES Incident Review page. Currently there are only some default fields for filtering, such as security domain, search type and status.
Regards, Colin
Hi Team, I have been trying to create a custom app with some config files and created it manually with the directory structure. But once submitted, I came across the failures below, as reported by Splunk. [screenshot of the reported failures not included] Please help me with how I can solve the issues, and also let me know if there are any other ways to package the app if I have some config items.
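If the failures come from app vetting, the AppInspect CLI can reproduce the same report locally before each submission. A sketch, assuming the package file is myapp.tar.gz (placeholder):

pip install splunk-appinspect
splunk-appinspect inspect myapp.tar.gz --mode precert

For an app that is mostly config files, the Splunk Packaging Toolkit is another way to validate and build the package, e.g. slim package myapp/.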
So I have been working on migrating use cases from one Splunk ES to Splunk Cloud for a client. They had around 760+ correlation searches created for similar use cases and field extractions, so I created a lookup and used a lookup definition to create the correlation search. They are pointing out that this might affect notable creation when there is a knowledge bundle failure. They also highlighted that it will lead to skipped searches. I have built similar cases for other environments with much larger lookups searching over much more data. My present lookup has 764 entries, and the correlation search runs on a 15-minute cron looking back over 1 hour of data.

So I would like the help of the experts to answer:

1. Is consolidating 760 separate searches into 1 search with the lookup efficient, and how?
2. In case of a knowledge bundle failure, will it be affected and miss alerting?
3. Will it cause skipped searches if the schedule overruns?

P.S. However, my search should re-trigger, and in any case it checks over the last 1 hour of data every 15 minutes. Please help on this matter.
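For context, a sketch of the lookup-driven pattern being described (the lookup and field names are placeholders): one scheduled search enriches events against the use-case lookup and aggregates per use case, instead of 760 separate schedules.

index=security_events earliest=-1h
| lookup usecase_definitions match_field AS signature OUTPUT usecase_name, severity, threshold
| where isnotnull(usecase_name)
| stats count BY usecase_name, severity, threshold, user
| where count >= tonumber(threshold)

Keeping threshold in the by clause carries it through the stats so each use case can apply its own trigger level in a single run.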