All Posts


Do you have any base searches? Are any of the panels driven by saved searches?  
@Priya70 If the search returns a large dataset or the panel uses a complex visualization, rendering might silently fail. Memory or CPU limitations in the browser can also cause rendering to hang, especially with multiple panels loading simultaneously. If multiple panels use similar base searches, consider using a base search with postProcess to reduce load (see the sketch below). Can you please paste your dashboard XML so we can identify the issue? I'll take a look.
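A minimal Simple XML sketch of the base search / post-process pattern, for illustration only; the index, sourcetype, field names, and panel titles are placeholders, not taken from the original dashboard:

<form>
  <!-- Base search runs once; post-process searches reuse its results -->
  <search id="base_blocked">
    <query>index=your_index sourcetype=your_sourcetype action=blocked | fields action src dest</query>
    <earliest>-24h@h</earliest>
    <latest>now</latest>
  </search>
  <row>
    <panel>
      <title>Blocked by source</title>
      <chart>
        <search base="base_blocked">
          <query>| stats count by src</query>
        </search>
      </chart>
    </panel>
    <panel>
      <title>Blocked by destination</title>
      <chart>
        <search base="base_blocked">
          <query>| stats count by dest</query>
        </search>
      </chart>
    </panel>
  </row>
</form>

Note the base search explicitly lists the fields it returns with | fields, so the post-process stats in each panel have what they need without re-running the full search.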
Hi,
How did you figure out that the problem was with that firewall search? You are doing lots of appends, which will always slow down a search because you are pushing everything to the search head. You are also doing an unbounded 6-month transaction, which is going to be slow and potentially unpredictable. What are your data volumes for each individual search? The separate times for your 6 individual searches will not sum up to give you the expected cost of the overall search, and I would not actually expect the tstats searches per se to be the source of any performance problem. If you are searching summaries only from accelerated data models, they should be the least of your worries. Your CrowdStrike search is getting ALL the data for 6 months and running transaction on that, which, if you have any kind of volume there, is unlikely to be reliable, because transaction will silently drop results when it hits limits. A sketch of a transaction-free alternative is below.
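For illustration only, a hedged sketch of counting distinct detections per month without transaction. It reuses the index, sourcetype, and field names from the question, and it assumes the action filter can be applied per event on the DetectionSummaryEvent itself rather than per transaction; that assumption needs verifying against your data.

index=crowdstrike-hc sourcetype="CrowdStrike:Event:Streams:JSON" "metadata.eventType"=DetectionSummaryEvent action=blocked earliest=-6mon@mon latest=now
| bin _time span=1mon
| stats dc(event.DetectId) as Blocked by _time
| eval Source="EDR"

Because dc() only needs the DetectId values, there is no need to reassemble whole transactions just to count them, which avoids the silent result-dropping behaviour described above.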
@danielbb I wasn't able to find anything; we need to build a brand new one. You should be able to quickly build one using Dashboard examples. If you have ondemand credits on your account entitlement, you can also leverage PS expert service to build dashboards; they can shoulder surf to get you started.

Assets Inventory Example:
index=<your_tenable_index> sourcetype=tenable:io:assets | eval ip=mvindex(ipv4, 0) | stats count by hostname, ip, os, last_seen, tags

Plugin Overview Example:
index=<your_tenable_index> sourcetype=tenable:io:plugin | stats count by plugin_name, plugin_id, family

Audit Log Events Example:
index=<your_tenable_index> sourcetype=tenable:io:audit_logs | timechart count by action
1. As @richgalloway mentioned, a lag between _time and _indextime (typically small, but it really depends on the input) is a normal state. Or at least, on its own it doesn't mean that something is wrong.
2. DATETIME_CONFIG=NONE explicitly disables timestamp recognition. Are you sure it is what you want?
3. If there is a difference between the timestamp included in the raw event and the timestamp stored in the _time field, the data is not properly onboarded. Tenable.io is a cloud service, so I suppose there is some modular input which pulls the data from the cloud and pushes it to Splunk. But I have no idea whether the timestamps should be parsed by the input itself and fed "as is" to Splunk or whether the data should be parsed in Splunk. Unfortunately, it's a third-party add-on, so practically anything could be happening inside...
Thank you @richgalloway. My question is, why would an app set something like this?

[tenable:io:vuln]
DATETIME_CONFIG = NONE

That's what this Tenable TA does; I don't get it.
DATETIME_CONFIG = CURRENT != NONE. CURRENT sets the timestamp from the aggregation queue time. NONE, in this instance, sets the timestamp from the time handed over to Splunk by the modular input script. Splunk then still needs to send the data to an indexer, which is where _indextime will be set. Yes, the data is cooked and the time is set on the heavy forwarder, but note that _indextime is NOT. An easy way to see this in action is to look at any of your data being ingested by DB Connect with _time being set to CURRENT. The lag will usually be negative, but every once in a while you'll see it jump to a few seconds, usually due to a blocked output queue. And of course any difference between the heavy forwarder and indexer clocks will cause times to be off as well.
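For reference, a hedged props.conf sketch contrasting the two settings as described above; the stanza names are placeholders and the comments paraphrase this explanation rather than quoting the official docs:

# props.conf (illustrative only)
[my:modular:input]
# NONE: skip Splunk's own timestamp recognition and keep the timestamp
# handed over by the modular input / upstream pipeline
DATETIME_CONFIG = NONE

[my:dbconnect:input]
# CURRENT: stamp events with the time they pass through the aggregation queue
DATETIME_CONFIG = CURRENT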
Are you sure this is your literal search? Because you cannot pipe to a tstats command unless it's with prestats=t append=t. Also, what does your security_content_summariesonly macro expand to? Also also - you're appending two "full" searches. Are you sure you're not hitting subsearch limits? And back to the point - that's what job details and job log are for - see the timeline, see where Splunk spends its time. Check the scanned results vs. returned results...
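To illustrate the first point, a hedged sketch of chaining two tstats calls with prestats=t append=t. The data models and action filters simply mirror the ones from the question; this is not a verified fix for that search, just the shape of the pattern:

| tstats prestats=t count from datamodel=Network_Traffic where All_Traffic.action="blocked" by _time span=1d
| tstats prestats=t append=t count from datamodel=Web where Web.action="block" by _time span=1d
| stats count by _time

With prestats=t, each tstats emits partial results that the final stats completes, so the second tstats can legitimately follow the first in the pipeline; binning to months can then be done afterwards if needed.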
Currently, we receive a single email alert via Notable Event Aggregation Policies (NEAP) whenever our ITSI services transition from normal to high or critical. However, we need an automated process that sends recurring email alerts every 5 minutes if the service remains degraded and hasn't reverted back to normal. From my research, many forums and documentation suggest achieving this through Correlation Searches. However, since we rely on KPI alerting, and none of our Correlation Searches (even the out-of-the-box ones) seem to function properly, this approach hasn't worked for us... Given the critical nature of the services we monitor, we’re seeking guidance on setting up recurring alerts using NEAPs or any other reliable method within Splunk ITSI. Any assistance or insights on how to configure this would be greatly appreciated.
Hi everyone! I am working on building a dashboard which captures all of the firewall, web proxy, EDR, WAF, email, and DLP blocks for the last 6 months in a table format (sample table image omitted). I am able to write a query that gives me the count for each parameter, and I then append all the single queries into one, which makes the final query run slower and take forever to complete. Here is the final query:

| tstats `security_content_summariesonly` count as Blocked from datamodel=Network_Traffic where sourcetype IN ("cp_log", "cisco:asa", "pan:traffic") AND All_Traffic.action="blocked" earliest=-6mon@mon latest=now by _time
| eval Source="Firewall"
| tstats `security_content_summariesonly` count as Blocked from datamodel=Web where sourcetype IN ("alertlogic:waf","aemcdn","aws:*","azure:firewall:*") AND Web.action="block" earliest=-6mon latest=now by _time
| eval Source="WAF"
| append [search index=zscaler* action=blocked sourcetype="zscalernss-web" earliest=-6mon@mon latest=now | bin _time span=1mon | stats count as Blocked by _time | eval Source="Web Proxy"]
| append [| tstats summariesonly=false dc(Message_Log.msg.header.message-id) as Blocked from datamodel=pps_ondemand where (Message_Log.filter.routeDirection="inbound") AND (Message_Log.filter.disposition="discard" OR Message_Log.filter.disposition="reject" OR Message_Log.filter.quarantine.folder="Spam*") earliest=-6mon@mon latest=now by _time | eval Source="Email"]
| append [search index=crowdstrike-hc sourcetype="CrowdStrike:Event:Streams:JSON" "metadata.eventType"=DetectionSummaryEvent metadata.customerIDString=* earliest=-6mon@mon latest=now | bin _time span=1mon | transaction "event.DetectId" | search action=blocked NOT action=allowed | stats dc(event.DetectId) as Blocked by _time | eval Source="EDR"]
| append [search index=forcepoint_dlp sourcetype IN ("forcepoint:dlp","forcepoint:dlp:csv") action="blocked" earliest=-6mon@mon latest=now | bin _time span=1mon | stats count(action) as Blocked by _time | eval Source="DLP"]
| eval MonthNum=strftime(_time, "%Y-%m"), MonthName=strftime(_time, "%b")
| stats sum(Blocked) as Blocked by Source MonthNum MonthName
| xyseries Source MonthName Blocked
| addinfo
| table Source [| makeresults count=7 | streamstats count as month_offset | eval start_epoch=relative_time(now(),"-6mon@mon"), end_epoch=now() | eval start_month=strftime(start_epoch, "%Y-%m-01") | eval month_epoch = relative_time(strptime(start_month, "%Y-%m-%d"), "+" . (month_offset-1) . "mon") | where month_epoch <= end_epoch | eval month=strftime(month_epoch, "%b") | stats list(month) as search ]

I figured out the issue is with the firewall query:

| tstats `security_content_summariesonly` count as Blocked from datamodel=Network_Traffic where sourcetype IN ("cp_log", "cisco:asa", "pan:traffic") AND All_Traffic.action="blocked" earliest=-6mon@mon latest=now by _time
| eval Source="Firewall"

Can someone guide me on how to fix this issue? I have been stuck on it for 2 weeks.
You can try the `showSplitSeries` option in Dashboard Studio to show each line/series as its own chart. See this run-anywhere sample dashboard:

{
  "title": "Test_Dynamic_Charting",
  "description": "",
  "inputs": {
    "input_global_trp": {
      "options": { "defaultValue": "-24h@h,now", "token": "global_time" },
      "title": "Global Time Range",
      "type": "input.timerange"
    }
  },
  "defaults": {
    "dataSources": {
      "ds.search": {
        "options": {
          "queryParameters": { "earliest": "$global_time.earliest$", "latest": "$global_time.latest$" }
        }
      }
    }
  },
  "visualizations": {
    "viz_5sPrf0wX": {
      "dataSources": { "primary": "ds_jq4P4CeS" },
      "options": { "showIndependentYRanges": true, "showSplitSeries": true, "yAxisMajorTickSize": 4 },
      "type": "splunk.line"
    }
  },
  "dataSources": {
    "ds_jq4P4CeS": {
      "name": "Search_1",
      "options": {
        "query": "index=_internal \n| eval sourcetype=sourcetype.\"##\".log_level\n| timechart count by sourcetype"
      },
      "type": "ds.search"
    }
  },
  "layout": {
    "globalInputs": [ "input_global_trp" ],
    "layoutDefinitions": {
      "layout_1": {
        "options": { "display": "auto", "height": 960, "width": 1440 },
        "structure": [
          { "item": "viz_5sPrf0wX", "position": { "h": 960, "w": 1240, "x": 0, "y": 0 }, "type": "block" }
        ],
        "type": "absolute"
      }
    },
    "options": {},
    "tabs": {
      "items": [ { "label": "New tab", "layoutId": "layout_1" } ]
    }
  }
}
It's normal for _indextime to not exactly match _time since there's always a delay from event transmission and processing.  How big is the lag you see?
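If it helps, a hedged sketch for quantifying that lag, reusing the index and sourcetype from the question; adjust the time range and grouping to taste:

index=tenable sourcetype="tenable:io:vuln"
| eval lag = _indextime - _time
| stats min(lag) max(lag) avg(lag) perc95(lag) count

Large or wildly varying values here would point at an onboarding or timestamp-parsing problem rather than normal transmission delay.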
Hi @sainag_splunk, I probably didn't explain it right. The data that flows in is under the following sourcetypes:
tenable:io:vuln
tenable:io:assets
tenable:io:plugin
tenable:io:audit_logs
And the app Tenable App for Splunk at https://splunkbase.splunk.com/app/4061 seems to present only the tenable:io:vuln sourcetype. Are there any other presentations, by any chance, for the assets, plugin, and audit_logs data?
We're using the Tenable Add-on for Splunk (TA-tenable) to ingest data from Tenable.io. The app's props.conf has the following:

[tenable:io:vuln]
DATETIME_CONFIG = NONE

When we run the following SPL:

index=tenable sourcetype="tenable:io:vuln" | eval lag = _indextime - _time

we are seeing non-zero lag values, even though I would expect the lag to be zero if _time truly equals _indextime. If anything, I would expect DATETIME_CONFIG = CURRENT. What am I missing?
I noticed that I don't have the Splunk Python SDK, because in my Python script I don't have import splunklib or import splunk. I am using a Python script within the Alteryx tool.
Hi, thanks for your input. Where do I add this in my search query? This is how my URL looks:

https://server/services/search/jobs/export?search=search%20index%3Dcfs_apiconnect_102212%20%20%20%0Asourcetype%3D%22cfs_apigee_102212_st%22%20%20%0Aearliest%3D-1d%40d%20latest%3D%40d%20%0Aorganization%20IN%20(%22ccb-na%22%2C%22ccb-na-ext%22)%20%0AclientId%3D%22AMZ%22%20%0Astatus_code%3D200%0Aenvironment%3D%22XYZ-uat03%22%0A%7C%20table%20%20_time%2CclientId%2Corganization%2Cenvironment%2CproxyBasePath%2Capi_name&&output_mode=csv
Thanks for pinging me, @joemcmahon. I haven't had many reasons to touch this app for quite a while, but I'll try to assess how much work the upgrade would be to keep the app current. No promises I can get to it this week, but I'll try soon. Splunk is not really part of my regular work or hobby life anymore, so it's not something I put much focus on these days.
Hi, any guidance on this?
Hi @joemcmahon
The Upgrade Readiness App warning for the Modal Text Message App version 2.0.0 regarding jQuery 3.5 compatibility is valid. Splunk Enterprise 9.0 and later require apps to use jQuery 3.5.x or later due to security vulnerabilities in older versions. Version 2.0.0 of the Modal Text Message App uses an older version of jQuery; you may be able to contact the developer @rjthibod, who may be able to resolve this with an updated version. Splunk announced with version 8.2.0 that apps need to move to jQuery 3.5, as older versions will be removed from future versions of Splunk. That was a number of versions ago, so older jQuery may well be removed soon if it hasn't been already, which would cause the app to stop working. Check out https://docs.splunk.com/Documentation/UpgradejQuery/1/UpgradejQuery/jQueryOverview for more info.
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing