All Topics



Hi, I have a correlation search created in Enterprise Security, scheduled as below. Mode: guided. Time range: earliest -24h, latest now. Cron: 0 3 * * *. Scheduling: real-time. Schedule window: auto. Priority: auto. Trigger alert when: greater than 0. Throttling: window duration 0. Response action: To: mymailid; priority: normal; include: link to alert, link to results, trigger condition, attach CSV, trigger time. In this case, the mail is not being delivered regularly. If I run the same SPL query in Search, it shows more than 300 result rows.
Hi. Put simply, I am trying to wrap my head around how I can configure an alert to trigger if a metric is X% higher or lower than the same metric, say, 1 day ago. For example, if I search

index=my_index eventStatus=fault | stats count by eventStatus

over "Last 15 minutes" and get, say, 100 results, can I trigger an alert if the same search over the same 15-minute timeframe 1 day ago is, for example, 10% higher or lower? Thanks
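For illustration, a minimal SPL sketch of one way such a comparison could be expressed, assuming index=my_index and eventStatus=fault from above; the fixed windows (-1455m is 24 hours plus 15 minutes ago) and the 10% threshold are illustrative assumptions, not a tested solution:

index=my_index eventStatus=fault ((earliest=-15m latest=now) OR (earliest=-1455m latest=-1440m))
| eval window=if(_time >= relative_time(now(), "-15m"), "current", "previous")
| stats count(eval(window="current")) as current, count(eval(window="previous")) as previous
| eval pct_change=round(abs(current - previous) / previous * 100, 1)
| where pct_change > 10

If this runs as a scheduled alert with "trigger when number of results > 0", it only fires when the deviation exceeds the threshold.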
Hello everyone, I'm setting up many Universal Forwarders to monitor MFT logs. MFT stores all its logs in the directory /data/mft/efs/logs/. In this directory, there are files and subdirectories that we do not want to monitor. The log files we do want to monitor are in subdirectories, and these subdirectories rotate every day. When MFT launches a flow for today, for example, it creates a subdirectory: /data/mft/efs/logs/2024-07-02/mft_flow.log. I created an inputs.conf file:

[default]
_meta=env::int-test

[monitor:///data/mft/efs/logs/*]
disabled=false
sourcetype=log4j
host=test-aws-lambda-splunk-code
followTail=0
whitelist=\d{4}-\d{2}-\d{2}\/.*\.log
index=test_filtre

But I don't get anything in my Splunk Enterprise. Can anyone help me?
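One thing that might be worth checking (an assumption, not a confirmed diagnosis): in a monitor path a single * matches only one path segment and does not recurse, while dropping the wildcard (or using ...) lets the monitor descend into the dated subdirectories. A minimal sketch reusing the values from above; please verify the whitelist regex against your real file paths:

[monitor:///data/mft/efs/logs]
disabled = false
index = test_filtre
sourcetype = log4j
host = test-aws-lambda-splunk-code
whitelist = \d{4}-\d{2}-\d{2}/[^/]+\.log$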
Hello, I have the Unix/Linux Add-on installed in my Splunk Cloud. This add-on gives me a list of inactive hosts. How do I create a 1-to-1 episode that alerts me every time a new host goes inactive?
Hello, I want to set up MTBF and MTTR for databases and their servers in AppDynamics. Kindly direct me to a knowledge-base document on how to achieve this.
Hi guys, my boss checked on the Splunk Master and wants to know, for each sourcetype: index, source, sourcetype, and the volume of log per day. How can I see that? I used this search before, but I don't think it is 100% correct:

| dbinspect index=*
| stats sum(rawSize) as total_size by index
| eval total_size_mb = total_size / (1024 * 1024)
| table index total_size_mb

How can I check this on my indexer? I can also SSH to the indexer. Thank you for your time.
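As a hedged sketch (not a verified report): dbinspect measures bucket sizes on disk, not daily ingest, so a common alternative is to read license_usage.log on the license master, where b, idx, st and s are the bytes, index, sourcetype and source fields - please double-check those field names and the source path in your environment:

index=_internal source=*license_usage.log* type=Usage
| bin _time span=1d
| stats sum(b) as bytes by _time, idx, st, s
| eval GB_per_day=round(bytes/1024/1024/1024, 2)
| rename idx as index, st as sourcetype, s as source
| table _time, index, source, sourcetype, GB_per_day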
Hey Splunkers, we are trying to implement and segregate roles in SOAR, so we have several roles with several users in them. The problem is that every user can see all other users and can assign containers/tasks to them. Is there a way to restrict visibility of, or assignment to, other users in the platform? I know it should probably be related to users & roles permissions, but I'm not getting it right... Thanks
Hello Splunk Community, I am working on a project that uses Splunk, and I need your assistance in properly installing and configuring both Syslog and Sysmon to ensure efficient data collection and analysis.
Hi Team, I am unable to log in to the Controller; it throws an error that says "Permission Issue." Earlier I was able to log in, but currently I cannot. When I sign in, the page shows authentication success, but then it shows the permission issue. Please help me on priority. Please find the attached screenshot for your reference. Thanks & Regards, PadmaPriya
url = "https://xyz.com/core/api-ua/user-account/stix/v2.1?isSafe=false&key=key" # Path to your custom CA bundle (optional, if you need to use a specific CA bundle) ca_bundle_path = "/home/ubuntu/sp... See more...
url = "https://xyz.com/core/api-ua/user-account/stix/v2.1?isSafe=false&key=key" # Path to your custom CA bundle (optional, if you need to use a specific CA bundle) ca_bundle_path = "/home/ubuntu/splunk/etc/apps/APP-Name/certs/ap.pem" # Make the API call through the HTTPS proxy with SSL verification response = requests.get(url, proxies=proxies, verify=ca_bundle_path) print("Response content:", response.content) If I use this code in separate python script.. It works and gives the response. However, If I use the same code in splunk, It doesn't. I get : SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get issuer certificate (_ssl.c:1106)')))   The code that is being used is : files = os.path.join(os.environ['SPLUNK_HOME'], 'etc', 'apps', 'App-Name', 'certs') pem_files = [f"{files}/{file}" for file in os.listdir(path=files) if (file.endswith('.pem') or file.endswith('.crt'))] url = f"{url}/core/api-ua/v2/alerts/attack-surface?type=open-ports&size=1&key={api_token}" if pem_files: logger.info(f"Certificate used: {pem_files[0]}") logger.info(requests.__version__) logger.info(urllib3.__version__) logger.info(proxy_settings) response = requests.request( GET, url, verify=pem_files[0], proxies=proxy_settings ) response.raise_for_status()   In the place of verify=pem_files[0],I have added verify="/home/ubuntu/splunk/etc/apps/APP-Name/certs/ap.pem" Still same error.
Hi Team, I have developed a sample .NET MSMQ sender and receiver application that uses asynchronous messaging.
Application: interacting with MSMQ (.\\private$\\My queue).
AppDynamics version: 24.5.2.
Transaction detection: configured automatic transaction detection rules.
Custom match rules: I created custom match rules specifically for MSMQ operations but did not see the expected results. We are expecting an MSMQ entry point for the .NET consumer application. I want to know how long the data has been sitting in MSMQ. I followed the instructions provided in the link below, but they didn't help.
Message Queue Entry Points (appdynamics.com)
Please look into this issue and help us resolve it. Thanks in advance.
I have a dashboard with multiple line charts showing values over time. I want all charts to have the same fixed time (X) axis range, so I can compare the graphs visually - something like the fixedrange option in the timechart command. However, I use a simple "| table _time, y1, y2, yN" instead of timechart, because I want the real timestamps in the graph, not some approximation due to timechart's notorious binning. To mimic the fixedrange behavior, I append a hidden graph with just two coordinate points (t_min|0) and (t_max|0):

...
| table _time, y1, y2, y3, ..., yN
| append
    [ | makeresults
    | addinfo
    | eval x=mvappend(info_min_time, info_max_time)
    | mvexpand x
    | rename x as _time
    | eval _t=0
    | table _time, _t ]

This appended search appears very cheap to me - it alone runs in less than 0.5 seconds. But now I have realized that it makes the overall search dramatically slower, about 10x in time. The number of scanned events explodes. This even happens when I reduce it to:

| append maxout=1 [ | makeresults count=1 ]

What's going on here? I would have expected the main search to run exactly as fast as before, and the only toll should be the time required to add one more line with a timestamp to the end of the finalized table, no?
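For comparison, one hedged (untested) variant that avoids the subsearch entirely: appendpipe runs its pipeline over the already-finished result set instead of spawning a second search job, so in principle the main search should be unaffected:

...
| table _time, y1, y2, y3, ..., yN
| appendpipe
    [ stats count
    | addinfo
    | eval _time=mvappend(info_min_time, info_max_time)
    | mvexpand _time
    | eval _t=0
    | table _time, _t ]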
Hi Experts, it used to work fine (I uploaded the last version last year), but today when I try to upload a new version of our Splunk app, https://splunkbase.splunk.com/app/4241, it fails. I have tried multiple times and it keeps failing. There is no error in the UI, but I can see a 403 in the browser inspector:

POST https://classic.splunkbase.splunk.com/api/v0.1/app/4241/new_release/ 403 (Forbidden)

Could you please let me know what is going on here?
Hi Splunkers, today I have a strange situation that requires thorough data sharing on my side, so please forgive me if I'm going to be long. We are managing a Splunk Enterprise infrastructure previously managed by another company. We are in charge of as-is management and, at the same time, of performing the migration to a new environment. The new Splunk environment setup is done, so now we need to migrate the data flow. Following Splunk best practice, we need to temporarily run a double data flow:

- Data must still go from the log sources to the old environment.
- Data must also flow from the log sources to the new environment.

We already handled a double data flow for another customer, using the Route and filter data doc and support here on the community. So the point is not that we don't know how it works. The issue is that something is not going as expected.

So, how is the current environment configured? The key elements:

- A set of HFs deployed in the customer data center.
- A cloud HF in charge of collecting data from the above HFs and from other data inputs, like network ones.
- 2 different indexers: they are not in a cluster, they are separate and isolated indexers. The first one collects a subset of the data forwarded by the cloud HF, the second one the remaining data.

How is the cloud HF configured for TCP data routing? In $SPLUNK_HOME/etc/system/local/inputs.conf, two stanzas are configured to receive data on ports 9997 and 9998; the configuration is more or less:

[<log sent on HF port 9997>]
_TCP_ROUTING = Indexer1_group

[<log sent on HF port 9998>]
_TCP_ROUTING = Indexer2_group

Then, in $SPLUNK_HOME/etc/system/local/outputs.conf we have:

[tcpout]
defaultGroup=Indexer1_group

[tcpout:Indexer1_group]
disabled=false
server=Indexer1:9997

[tcpout:Indexer2_group]
disabled=false
server=Indexer2:9997

So the current behavior is:

- Logs collected on port 9997 of the cloud HF are sent to Indexer1.
- Logs collected on port 9998 of the cloud HF are sent to Indexer2.
- Everything else, like network input data, is sent to Indexer1, thanks to the default group setting.

At this point, we need to insert the new environment hosts; in particular, we need to link a new set of HFs. In this phase, as already shared, we need to send data to the old environment and to the new one. We could discuss avoiding the additional HF set, but there are reasons for using it and the architecture has been approved by Splunk itself. So what we have to achieve now is:

- All data is still sent to the old Indexer1 and Indexer2.
- All data is also sent to the new HF set.

How did we try to do this? Below is our changed configuration.

inputs.conf:

[<log sent on HF port 9997>]
_TCP_ROUTING = Indexer1_group, newHFs_group

[<log sent on HF port 9998>]
_TCP_ROUTING = Indexer2_group, newHFs_group

outputs.conf:

[tcpout]
defaultGroup=Indexer1_group, newHFs_group

[tcpout:Indexer1_group]
disabled=false
server=Indexer1:9997

[tcpout:Indexer2_group]
disabled=false
server=Indexer2:9997

[tcpout:newHFs_group]
disabled=false
server=HF1:9997, HF2:9997, HF3:9997

In a nutshell, we tried to achieve:

- Logs collected on port 9997 of the cloud HF are sent to Indexer1 and to the new HFs.
- Logs collected on port 9998 of the cloud HF are sent to Indexer2 and to the new HFs.
- Everything else is sent, thanks to the default group setting, to Indexer1 and to the new HFs.

So, what went wrong?

- Logs collected on port 9997 of the cloud HF are correctly sent to both Indexer1 and the new HFs.
- Logs collected on port 9998 of the cloud HF are correctly sent to both Indexer2 and the new HFs.
- The remaining logs are not correctly sent to both Indexer1 and the new HFs.

In particular, we should see the following behavior: all logs not collected on ports 9997 and 9998, like network data inputs, are equally sent to Indexer1 and the new HFs - a copy to Indexer1 and a copy to the new HFs. So, if N logs are output, we should see 2N logs sent: N to Indexer1 and N to the new HFs.

What we are actually seeing is: all logs not collected on ports 9997 and 9998, like network data inputs, are auto load balanced and split between Indexer1 and the new HFs. So, if N logs are output, we see N sent: roughly 80% go to Indexer1 and the remaining 20% to the new HFs. I have underlined several times that the network inputs are among the logs not collected on ports 9997 and 9998, because we are seeing that the auto load balancing and splitting happens mostly with them.
How do I set up the Jamf Compliance Reporter Add-on in Splunk? I couldn't find the documentation for this app. Please share it if you have it, or walk me through the process. Thank you!
I want to write a query that gives the number of times the event occurred and the total time taken. This is the log:

log: 2024-07-01 16:57:17.022 INFO 1 --- [nio-8080-exec-6] xyztask : FILE_TRANSFER | Data | LOGS | Fetched count:345243 time:102445ms
time: 2024-07-01T16:57:17.022583728Z

I want a result like:

| count   | time   |
| 2528945 | 130444 |

The query I am writing is:

base search
| stats count by count
| stats count by time
| table count time

For "stats count by count" I get the error: Error in 'stats' command: The output field 'count' cannot have the same name as a group-by field. The query isn't right; a correct solution would be helpful. I have also tried different queries in different ways.
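A hedged sketch of how the two numbers could be pulled out of that message and summed; the regex and the search filter are assumptions based only on the sample event shown above:

<base search> "FILE_TRANSFER" "Fetched count"
| rex "Fetched count:(?<fetched_count>\d+)\s+time:(?<elapsed_ms>\d+)ms"
| stats sum(fetched_count) as count, sum(elapsed_ms) as time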
The "Splunk Add-on for NetApp Data ONTAP" is showing on the site as Unsupported. Splunk Add-on for NetApp Data ONTAP | Splunkbase We are trying to find out if the app can be used with REST API, sin... See more...
The "Splunk Add-on for NetApp Data ONTAP" is showing on the site as Unsupported. Splunk Add-on for NetApp Data ONTAP | Splunkbase We are trying to find out if the app can be used with REST API, since OnTAP is eliminating its support for legacy ZAPI/ONTAPI Can anyone provide information as to the long-term prospects of this or another App which would collect data from Netapp OnTAP?
Hi All, I want to fetch data from Splunk into Power BI; please suggest how. I know there is a Splunk ODBC driver we can use to fetch the data, but we are using SAML authentication. Can you help with what to enter for the username and password? There is also an option to use a bearer token - where and how do I use the token? I need to create a custom search to fetch the data. @gcusello your inputs are needed on this.
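Not Power BI-specific, but for what it's worth, a hedged sketch of how a Splunk authentication token is presented to the REST search endpoint; the host, port, token and search string are placeholders. Whatever connector ends up being used (ODBC, web connector, script) needs to send this same Authorization header instead of a username/password:

import requests

# Placeholders: adjust to your environment
SPLUNK_MGMT = "https://your-splunk-host:8089"   # management port, not 8000
TOKEN = "<authentication token created under Settings > Tokens>"

response = requests.post(
    f"{SPLUNK_MGMT}/services/search/jobs/export",
    headers={"Authorization": f"Bearer {TOKEN}"},
    data={
        "search": "search index=_internal | head 10",   # your custom search
        "output_mode": "csv",
    },
    verify=True,   # point this at your CA bundle if needed
)
print(response.text)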
I am trying to get the ingestion per day in terabytes for each index. I am using the search below, which works, but the ingestion numbers are not formatted well. For example, with this search I get a usage value of 4587.16 for an index, which would be about 4.59 terabytes per day; I would like the number in the search results to be rounded and shown as 4.59.

index=_internal sourcetype=splunkd source=*license_usage.log type=Usage idx=*
| stats sum(b) as usage by idx
| rename idx as index
| eval usage=round(usage/1024/1024/1024,2)
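A hedged tweak of the same search: dividing by 1024 a fourth time converts the bytes straight to terabytes (strictly tebibytes); use 1000 in the last step instead if you prefer decimal terabytes, which is what the 4.59 figure above implies.

index=_internal sourcetype=splunkd source=*license_usage.log type=Usage idx=*
| stats sum(b) as usage by idx
| rename idx as index
| eval usage_tb=round(usage/1024/1024/1024/1024,2)
| table index, usage_tb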
Hi Team, I am caught in a maze of how to use the stats function to get the data in the format I expect.

Sample data: we have alerts with different status values. Alert and status are field names.

| Alert              | values(status)                 | Total_Count |
| 001_Phishing_Alert | In progress, Resolved, On-Hold | 5           |
| 002_Malware_alert  | In-progress, Resolved          | 6           |
| 003_DLP_Alert      | In-Progress                    | 4           |

Desired / expected output: I want to split the count by each individual status value.

| Alert              | Count | In-Progress | Resolved | On-Hold |
| 001_Phishing_Alert | 5     | 3           | 1        | 1       |
| 002_Malware_Alert  | 6     | 3           | 3        | 0       |
| 003_DLP_alert      | 4     | 4           | 0        | 0       |
| Total              | 15    | 8           | 4        | 1       |

I am trying:

|..base search | stats count by Alert, status

OR

|..base search.. | stats count, values(status) by Alert

but nothing works out to show the desired output. Can someone please assist?
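For what it's worth, a hedged sketch of the pivot the desired table describes: chart splits the count per status value and addcoltotals adds the Total row (field names Alert and status are taken from the post). Note that the status spellings differ in the sample data (In progress / In-progress / In-Progress), so they may need normalizing with an eval before charting.

|..base search..
| chart count over Alert by status
| addtotals row=true fieldname=Count
| addcoltotals labelfield=Alert label=Total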