All Topics

I created a report to find the intersection of two sets. A: inputlookup spam_ip (indicators of compromise); B: index=main (event log). The search is:

| inputlookup spam_ip
| join srcip [ search index=main | rename ip as srcip | fields srcip ]
| summaryindex spool=t uselb=t addtime=t index="threat_summary" file="RMD55f183b338b214f84_487362985.stash_new" name="matches test" marker=""

The time range is All Time (because the events in both sets grow daily). After the report runs, it adds its results to the summary index. The problem is that the result contains all the events added before.
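The intersection the join computes can be sketched in plain Python. This is only an illustration of the set logic, not the add-on's implementation; the IP values below are made up:

```python
# A: IPs from the spam_ip lookup (indicators of compromise)
ioc_ips = {"203.0.113.5", "198.51.100.7", "192.0.2.9"}
# B: source IPs seen in index=main (ip renamed to srcip in the subsearch)
event_ips = {"203.0.113.5", "10.0.0.4", "192.0.2.9"}

# The join on srcip keeps only IPs present in both sets
matches = ioc_ips & event_ips
print(sorted(matches))
```

Because the report runs over All Time, every run recomputes this full intersection and appends it, which is why the summary index accumulates previously seen matches.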
Hi, I have a requirement where we need to categorise events into 4 separate categories based on the URL, then calculate the average response time for each category. Finally, we want to display all the averages by category together in a single stats table. Is there an easy way to do this?
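In SPL this is typically an eval with case() on the URL followed by stats avg() by the new category field. The equivalent computation, sketched in plain Python (the URLs, category rules, and response times below are all made up):

```python
from statistics import mean

# Made-up events: (url, response_time_ms)
events = [("/api/orders", 120), ("/api/orders", 80),
          ("/static/logo.png", 10), ("/login", 300), ("/login", 100)]

def categorise(url):
    """Bucket a URL into a category (these rules are hypothetical)."""
    if url.startswith("/api"):
        return "api"
    if url.startswith("/static"):
        return "static"
    if url.startswith("/login"):
        return "auth"
    return "other"

# Group response times by category, then average each group
by_cat = {}
for url, rt in events:
    by_cat.setdefault(categorise(url), []).append(rt)

averages = {cat: mean(times) for cat, times in by_cat.items()}
print(averages)
```

The stats-table equivalent would be one row per category with its average, exactly what `stats avg(response_time) by category` produces.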
Hi, I am sending AWS S3 data into Splunk through the AWS TA. At first the data indexes properly, but after two days the indexing stops. In the internal logs I am getting the following error: (expired token). Any help would be appreciated.
Hi Splunk Community, how can I get an HTML style to apply to just one panel and not the entire dashboard? I want to decrease the text size in a single panel, but the style applies to the whole dashboard. Below is my code for the panel I would like to style. Any advice is helpful.

</fieldset>
<row>
  <panel>
    <html>
      <style>
        td {
          line-height: 10px !important;
          font-size: 5px !important;
        }
      </style>
    </html>
    <table>
      <title>Last Updated</title>
      <search>
        <query>| rest /servicesNS/-/-/data/lookup-table-files search="*_Sports_Report.csv"
| eval updated=strptime(updated,"%FT%T%:z")
| eval desired_time=strftime(updated, "%B %d, %Y")
| rename desired_time as "Last Updated" title as Team
| table "Last Updated", Team</query>
        <earliest>-15m</earliest>
        <latest>now</latest>
      </search>
      <option name="drilldown">none</option>
    </table>
  </panel>
</row>
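A commonly suggested approach in Simple XML is to give the panel an id and prefix the CSS selectors with that id, so the rules only match elements inside that panel. A sketch of the idea (the id `last_updated_panel` is an invented name; the rest of the panel stays as in the original):

```xml
<row>
  <panel id="last_updated_panel">
    <html>
      <style>
        #last_updated_panel td {
          line-height: 10px !important;
          font-size: 5px !important;
        }
      </style>
    </html>
    <!-- table element unchanged from the original panel -->
  </panel>
</row>
```

Without the id prefix, a bare `td` selector is injected into the page as a whole, which is why every table in the dashboard picks it up.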
I'm trying to create a dashboard that shows the count of new vulnerabilities between this month and last month, using scan_id as the differentiator. The vuln data comes in as a CSV with asset_id, scan_id, and vuln_id. The dashboard will have separate queries for unique and total values, based on vuln_id. My current logic for total vulns is:

this_month = count(list of vulns from scan 'x' not in list of vulns from scan 'y')
last_month = count(list of vulns from scan 'y' not in list of vulns from scan 'z')

index=vulns sourcetype=vuln_scan scan_id=XXXXXX NOT [ search index=vulns sourcetype=vuln_scan scan_id=YYYYYY ]
| stats count(vuln_id) as events
| eval period="last_month"
| append [ search index=vulns sourcetype=vuln_scan scan_id=YYYYYY NOT [ search index=vulns sourcetype=vuln_scan scan_id=ZZZZZZ ] | stats count(vuln_id) as events | eval period="this_month" ]
| fields events, period

I feel like there is a much more optimal way to do this, especially since I'm also going to report on persistent and remediated vulnerabilities. I tried using case (or nested if) to create fields for "this_month", "last_month", and "previous_month", but can't figure out how to subtract Y from X without using multiple searches. The subtraction has to happen before the count, since not all vulns are 'new'. Any help would be greatly appreciated. Thanks!
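The month-over-month logic is set subtraction before counting. The same arithmetic in plain Python, with made-up vuln_ids (X = this month's scan, Y = last month's, Z = the month before):

```python
# vuln_ids reported by each scan (made-up data)
scan_x = {"V-1", "V-2", "V-3", "V-4"}
scan_y = {"V-2", "V-3", "V-5"}
scan_z = {"V-3", "V-5"}

this_month_new = len(scan_x - scan_y)   # vulns in X that were not in Y
last_month_new = len(scan_y - scan_z)   # vulns in Y that were not in Z
print(this_month_new, last_month_new)
```

Persistent vulns would be the intersection (`scan_x & scan_y`) and remediated ones the reverse difference (`scan_y - scan_x`), which is why the subtraction has to happen before any counting.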
Hi, I found an issue that I was not able to work around with alert throttling. Given a search like this:

| mstats avg(_value) as value WHERE metric_name="disk.used_percent" AND span=5m by host, path
| eval "Disk Used (%)"=round(value,2)
| search "Disk Used (%)" >= 90 AND "Disk Used (%)" < 95
| table host, path, "Disk Used (%)"

If I set the throttling to "per result", the problem is that if 50 hosts crossed the threshold I would get 50 individual alerts; with an email action that means 50 emails, which in this particular case is undesirable. If I set the trigger condition to "Once" instead of "For each result", I get a single email with all 50 instances in the inline table, but then some alerting may be missed during the throttling window, such as new instances that cross the threshold during the throttling period (which is exactly the use case the "per result" throttling is meant to solve). Basically what I need is a kind of "smart throttling": silence alerts for hosts that have already triggered, while still clamping all occurrences at a given point into a single alert event, if possible. Thanks!
Is querying by myField=true the same thing as myField!=false? Is myField=true better or more efficient? Thank you!
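They are not always equivalent: in SPL, a `field!=value` comparison only matches events where the field exists, and it matches any value other than the one given, not just the opposite one. A plain-Python sketch of the difference (the event values are made up; a missing field is modelled as an absent key):

```python
# Four events: true, false, a third value, and one missing the field entirely
events = [{"myField": "true"}, {"myField": "false"},
          {"myField": "unknown"}, {}]

# myField=true : field exists and equals "true"
eq_true = [e for e in events if e.get("myField") == "true"]

# myField!=false : field exists and is anything other than "false"
ne_false = [e for e in events if "myField" in e and e["myField"] != "false"]

print(len(eq_true), len(ne_false))
```

Here `myField!=false` also matches the "unknown" event, so the two predicates only coincide when the field is strictly boolean and always present.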
I have a field of titles filled with sentences about why a test failed in a security audit, separated by asset. Two different assets can have the same reason listed in different words; for example, one might say "Login Password is empty" and another asset's failure will say "Login password did not meet requirements". If I could aggregate them based on words like "password", I could get more value from the data. I can't hardcode it because I don't know all the possible aggregates. Here is what I have so far, and I'm open to any feedback:

earliest=-1d@d latest=@d index=cdb_summary sourcetype=cfg_summary source=CDM_*_Daily_Summary
| search hva=*
| eval FailedSTIGs=mvsort(split(FailedSTIGs,","))
| stats values(fismaid) as fismaid dc(asset_id) as Affected by FailedSTIGs, hva
| lookup DHS_Expected_Checks "STIG ID" as FailedSTIGs output "Rule Title"
| fit TFIDF "Rule Title" as rule_tfidf ngram_range=1-12 max_df=0.6 min_df=0.2 stop_words=english
| fit KMeans rule_tfidf* k=8
| fields cluster "Rule Title"
| sample 6 by cluster
| sort by cluster
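The MLTK pipeline above (TFIDF followed by KMeans) clusters titles by their word content. A much more minimal stdlib sketch of the same idea, bucketing titles that share any non-filler word (the titles and stop-word list here are made up, and this greedy grouping is far cruder than TFIDF+KMeans):

```python
import re

titles = ["Login Password is empty",
          "Login password did not meet requirements",
          "Audit logging is disabled"]

def tokens(text):
    """Lowercase word set for a title."""
    return set(re.findall(r"[a-z]+", text.lower()))

stop = {"is", "did", "not", "the", "a", "an", "to", "of"}
groups = []
for title in titles:
    words = tokens(title) - stop
    for group in groups:
        if words & group["words"]:          # shares a keyword with this group
            group["titles"].append(title)
            group["words"] |= words
            break
    else:                                    # no overlap: start a new group
        groups.append({"words": set(words), "titles": [title]})

print([g["titles"] for g in groups])
```

Both password-related titles land in one bucket on the shared words "login"/"password", which is the aggregation effect the TFIDF features give KMeans, just without the vector weighting.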
I have a lookup with the number of files that should be sent each hour (common/flat file names), with the hour as the column header. I would like to use an eval to set the current hour, then use it to pull back that hour's thresholds from the lookup:

| eval current_hour=strftime(now(),"%H:00")

Use the above with inputlookup to pull back the fields file_name and 14:00, for example:

| inputlookup file_monitoring_.csv
| fields $current_hour$

The lookup looks like this:

file_name,00:00,01:00,02:00,03:00,etc...
file001.csv,5,10,15,20,etc....
file002.csv,0,0,0,1,etc....
file007.csv,105,206,409,727,etc....
file009.csv,1,2,3,4,etc....
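The intended operation, selecting the threshold column whose header matches the current hour, can be sketched in plain Python (the CSV rows are sample data from the post, trimmed to four hour columns):

```python
import csv
import io
from datetime import datetime

# The lookup, one threshold column per hour
lookup_csv = """file_name,00:00,01:00,02:00,03:00
file001.csv,5,10,15,20
file002.csv,0,0,0,1
"""

def thresholds_for(hour):
    """Map file_name -> threshold, picking the column whose header is `hour`."""
    return {row["file_name"]: int(row[hour])
            for row in csv.DictReader(io.StringIO(lookup_csv))}

# Equivalent of the SPL: eval current_hour=strftime(now(), "%H:00")
current_hour = datetime.now().strftime("%H:00")
print(thresholds_for("02:00"))   # pinned to 02:00 so the output is deterministic
```

The SPL difficulty is the same column-by-name selection: the hour string has to become a field name, which is what the `$current_hour$` token is attempting.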
Hi, this photo includes the SPL search for the Microsoft Azure App for Splunk in the Billing Overview. This search no longer returns any events, because the properties.pretaxCost and properties.usageQuantity fields no longer exist when I search index=item_prod_event_azure_90d OR index=cyber_prod_event_mscloud_1y sourcetype="azure:billing". Additionally, the fields in the following line no longer show up either:

| dedup properties.usageStart properties.instanceId properties.meterDetails.meterSubCategory Cost properties.meterDetails.meterName Quantity

What can be done to view the billing information again? Thanks so much!
Hello! We have two offices: Head and Branch. The Head office has a Splunk infrastructure (Enterprise Security) managed by its own admins. The Branch office is in the process of deploying a Splunk system that must then be managed by Branch administrators. The license server is located at the Head office, and the Branch license will be uploaded to it.

At the same time, part of the data from the Branch must be sent to the Head office. For this purpose a Heavy Forwarder will be installed in the Branch, which will initially receive data from all sources. The HF will then send all the data to the Branch indexer and, in parallel, send part of it to the Head office.

The main catch is that the administrators in both offices want complete control over their own systems: no one from the Head office should be able to influence the Branch system in any way, and vice versa. It turns out that the indexers of the two offices are not connected in any way, so the data sent to the Head office will be counted against the license twice (once when indexed on the Branch indexer, and a second time on the Head indexer).

How should the architecture be organized so that the administrators of each office retain control of their own system, but the license is not measured twice on the data sent to the Head office?
Strangest thing. I have some Infoblox logs coming in from a syslog-ng server where we have a UF installed. The UF is successfully sending the Infoblox logs to Splunk, BUT I can only see those logs in an all-time real-time search; I can't see them anywhere in a historical all-time search, even when logged in as admin. I can search other logs in the same index, but searching for these just comes back with "0 events" and no errors in the job, just nothing. I can't find them via sourcetype, source, or host. Any ideas? I know the data is there, but I just can't see it in historical searches.
Hello, I need to install and test a private app on my Splunk Cloud trial, but per the documentation (https://docs.splunk.com/Documentation/SplunkCloud/8.2.2104/User/PrivateApps) I do not have the option to upload the app. I also did not see the option after trying "Create app". Splunk Cloud version: 8.0.2006. I would appreciate any suggestions on how to upload a private app to Splunk Cloud! Thanks.
Hi Team, as part of an audit question: can we remove the admin write role from a few reports and alerts? We don't want anyone, including admin, to be able to change them. How can we achieve this from the configuration end? We would also like to remove the clone option from the UI for all reports and alerts via configuration.
Hi community, is it possible to calculate the time between info_max_time and info_min_time according to the period chosen in the time picker, but removing the weekends contained in the period? For example, I run a search like this:

sourcetype=XXXXXXX
| `transactions`
| eval date_wday=strftime(_time, "%w%H")
| search date_wday >= 106 AND date_wday <= 523

This takes the events between Monday 6 a.m. and Friday 10 p.m. If my time picker equals "last month", so April, then I have to do:

| eval period = info_max_time - info_min_time

and remove the weekends from my variable "period": for the last month, period must be 22 days, because 30 days in the month minus 8 weekend days. I want the calculation to be done automatically according to the period chosen in the time picker. Can you help me please?
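The weekday-only duration being asked for can be sketched by converting the two boundary times to dates and counting only Monday-Friday days between them. A minimal Python version of that arithmetic, pinned to the April example from the post (in practice the bounds would come from info_min_time/info_max_time via date.fromtimestamp):

```python
from datetime import date, timedelta

def weekdays_between(start, end):
    """Count Monday-Friday days in [start, end), skipping weekends."""
    days = 0
    d = start
    while d < end:
        if d.weekday() < 5:   # 0=Mon .. 4=Fri
            days += 1
        d += timedelta(days=1)
    return days

# April 2021: 30 calendar days, 8 of them weekend days -> 22 weekdays
print(weekdays_between(date(2021, 4, 1), date(2021, 5, 1)))
```

This matches the expected result in the post: 30 days minus 8 weekend days equals 22.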
Hello, I am wondering whether it is better to enable Hyper-Threading on a dedicated Search Head running Splunk Enterprise Security. Our server is a blade with a dedicated VM with 2x20 physical-core Intel Xeon 6148 CPUs + 96GB RAM (we can increase the RAM up to 256GB if necessary). I guess that with 40 physical cores searches could be faster, and with 80 virtual cores there would be more "room" for concurrent searches. So is it better to have 40 physical cores OR 80 virtual cores? Is there any study showing the pros and cons? Thanks a lot, Edoardo
I need to pull all the users and the files/sourcetypes or queries they ran to export data out of Splunk. I found one query, but it doesn't tell me what query they ran:

index=_internal file=export
| fields file user uri_path
| table file user uri_path
What are the ways of optimising a dashboard other than the base-search method?
Hi, I need to ingest Azure DevOps data into my Splunk instance, and this is what I get in azure-devops.log:

2021-05-12 17:20:36,494 ERROR pid=516 tid=MainThread file=base_modinput.py:log_error:307 | Get error when collecting events.
Traceback (most recent call last):
  File "C:\Program Files\Splunk\etc\apps\TA-azure-devops-add-on\bin\ta_azure_devops_add_on\modinput_wrapper\base_modinput.py", line 127, in stream_events
    self.collect_events(ew)
  File "C:\Program Files\Splunk\etc\apps\TA-azure-devops-add-on\bin\azure_devops_main.py", line 88, in collect_events
    input_module.collect_events(self, ew)
  File "C:\Program Files\Splunk\etc\apps\TA-azure-devops-add-on\bin\input_module_azure_devops_main.py", line 593, in collect_events
    ts_json = response.json()
  File "C:\Program Files\Splunk\etc\apps\TA-azure-devops-add-on\bin\ta_azure_devops_add_on\requests\models.py", line 850, in json
    return complexjson.loads(self.text, **kwargs)
  File "C:\Program Files\Splunk\Python-2.7\lib\json\__init__.py", line 339, in loads
    return _default_decoder.decode(s)
  File "C:\Program Files\Splunk\Python-2.7\lib\json\decoder.py", line 364, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "C:\Program Files\Splunk\Python-2.7\lib\json\decoder.py", line 382, in raw_decode
    raise ValueError("No JSON object could be decoded")
ValueError: No JSON object could be decoded

Does anybody know why?
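The ValueError at the bottom of the traceback means the HTTP response body was not JSON at all, typically an HTML error page, a redirect, or an empty body from a failed API call, which `response.json()` then fails to parse. A small sketch of guarding that call so the real payload becomes visible (`parse_api_response` is a hypothetical helper, not part of the add-on):

```python
import json

def parse_api_response(status_code, body):
    """Parse a JSON body, or raise an error that shows what actually came back."""
    try:
        return json.loads(body)
    except ValueError:
        raise ValueError("Expected JSON but got HTTP %s with body: %r"
                         % (status_code, body[:200]))

print(parse_api_response(200, '{"count": 3}'))
```

With a guard like this, the log would show the offending status code and body instead of just "No JSON object could be decoded", which usually points to an auth or URL problem in the input configuration.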
Hello, I have a query that examines events and outputs how many of each level there are:

index=* | eval level=lower(level) | stats count by level

It works fine, but in the output there are some results that should be merged:

level        count
debug        10
error        15
fatal        1
info         30
information  40
trace        2
warn         70
warning      75

'info' and 'information' are the same thing; 'warn' and 'warning' are also the same thing. Is there any way to extend/modify this query to get a combined count for 'info' and 'information' in a single result, and the same for 'warn' and 'warning'? Something like this?

level  count
debug  10
error  15
fatal  1
info   70
trace  2
warn   145

Many thanks, _scott
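The merge being asked for is a normalization of synonym values onto one canonical level before counting (in SPL, typically an eval with case() inserted before the stats). The same fold-and-sum, sketched in plain Python using the counts from the table above:

```python
from collections import Counter

# Raw counts from the stats output
raw = {"debug": 10, "error": 15, "fatal": 1, "info": 30,
       "information": 40, "trace": 2, "warn": 70, "warning": 75}

# Synonyms folded onto a canonical level name
canonical = {"information": "info", "warning": "warn"}

merged = Counter()
for level, count in raw.items():
    merged[canonical.get(level, level)] += count

print(dict(merged))
```

The folded totals match the desired table: info becomes 30 + 40 = 70 and warn becomes 70 + 75 = 145, with the other levels unchanged.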